Previously I highlighted the release of an exploit for Elasticsearch that allows unauthorized code execution on a server running Elasticsearch 1.1.x. It has now been reported that this same exploit is being used to install DDoS (distributed denial of service) bots on vulnerable machines hosted within AWS. Elasticsearch instances should always be treated like a database and never be exposed directly to the internet. At a minimum you should use Nginx plugins to get JSON functionality directly from the web server and have Nginx act as a proxy to back-end processes like Elasticsearch.
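As a rough illustration of that setup (the hostname, port binding and credentials file below are placeholders of my own, not part of any particular deployment), an Nginx server block along these lines can sit in front of an Elasticsearch node that only listens on localhost:

# Minimal sketch: Elasticsearch listens on 127.0.0.1 only; Nginx takes the
# public traffic and requires a password before anything reaches the cluster.
server {
    listen      80;
    server_name search.example.com;               # placeholder hostname

    location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd; # assumed credentials file

        proxy_pass         http://127.0.0.1:9200; # Elasticsearch bound to localhost
        proxy_http_version 1.1;
        proxy_set_header   Connection "";
    }
}

With this in place the cluster is never reachable directly from the internet, and you can add rate limiting or TLS at the Nginx layer without touching Elasticsearch itself.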
I thought I’d share how I set up Nginx to proxy a private S3 bucket.
I wanted to password-protect the contents of a bucket without allowing any of the bucket's owner information to leak to the web user.
A simple configuration can be used if you want to serve objects that are public:
location ~* ^/s3/(.*) {
    resolver         172.31.0.2 valid=300s;   # VPC DNS resolver
    resolver_timeout 10s;

    set $s3_bucket 'your_bucket.s3.amazonaws.com';
    set $url_full  '$1';

    proxy_http_version 1.1;
    proxy_set_header   Host $s3_bucket;
    proxy_pass         http://$s3_bucket/$url_full;
}
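To get closer to the goal described above, password protecting the content and keeping the bucket's identity out of the responses, the same location can be extended along these lines (the credentials file path and the exact set of hidden headers are assumptions on my part, a sketch rather than a definitive list):

location ~* ^/s3/(.*) {
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd;  # assumed credentials file

    resolver         172.31.0.2 valid=300s;
    resolver_timeout 10s;

    set $s3_bucket 'your_bucket.s3.amazonaws.com';
    set $url_full  '$1';

    proxy_http_version 1.1;
    proxy_set_header   Host $s3_bucket;

    # Strip the Amazon response headers so nothing about the bucket or its
    # owner is exposed to the web user.
    proxy_hide_header    x-amz-id-2;
    proxy_hide_header    x-amz-request-id;
    proxy_hide_header    Set-Cookie;
    proxy_ignore_headers Set-Cookie;

    proxy_pass http://$s3_bucket/$url_full;
}

Note that this only covers the password and header-hiding side; keeping the bucket itself private from direct access is a separate concern handled at the S3 end, for example with a bucket policy that restricts requests to the proxy.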
AWS Management Portal for vCenter enables you to manage your AWS resources using VMware vCenter. The portal installs as a vCenter plug-in within your existing vCenter environment. Once installed, it enables you to migrate VMware VMs to Amazon EC2 and manage AWS resources from within vCenter. The AWS resources that you create using the portal will be located in your AWS account, even though they have been created using vCenter.
I’d like to cover how you can test Puppet modules. I’ve seen a lot of companies write Puppet modules and test them by deploying them directly onto machines, or worse yet, change manifests without any testing at all. The world of Puppet testing can seem daunting, but in the following paragraphs I hope to show that a few small changes to how you develop Puppet modules can save you from some very bad situations.
In a previous post I wrote about how we can use Auto Scaling groups (ASGs) to adapt quickly to user load. In this post I intend to explain a method for creating custom Amazon Machine Images (AMIs) using a project called Packer (packer.io).
What is Packer? Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration. Packer is lightweight, runs on every major operating system, and is able to create machine images for multiple platforms in parallel.