An increasing amount of data is being generated and stored each day on premises. The sources of this data range from traditional sources like user or application-generated files, databases, and backups, to machine-generated IoT, sensor, and network device data. Customers are looking for cost-optimized and operationally efficient ways to store and access their data. Increasingly, customers are also seeking to enable integration paths for their data so they can do more with it, such as building data lakes to gain deeper insights through analytics. This is just our approach to running an FTP server in AWS.

The EIP (Elastic IP) must not change, ever

It would be extremely frustrating if we had to update a DNS record every time the IP address for the FTP server changes. Our solution to this problem is to set the FTP server within an ASG (Auto Scaling Group) of min=1 max=1 and, as part of the boot-up sequence, have it auto-assign itself the EIP (Elastic IP), which is passed in as a CloudFormation parameter. This ensures that, if the instance dies for whatever reason, it will come back up and, although the internal IP may be different, the EIP will be the same.
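We won't reproduce our actual boot script here, but a minimal sketch of that self-association step could look like the following. The variable name `EIP_ALLOCATION_ID` and the script itself are illustrative; it assumes the allocation ID is injected via user data from the CloudFormation parameter, and that the instance profile allows ec2:AssociateAddress.

```
#!/bin/bash
# Boot-time EIP self-association -- a minimal sketch, not our exact script.
# Assumes EIP_ALLOCATION_ID is injected via user data from the
# CloudFormation parameter.
set -euo pipefail

# Ask the instance metadata service who we are.
# (Newer instances enforcing IMDSv2 would need a token first.)
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Point the fixed Elastic IP at whichever instance the ASG just launched.
aws ec2 associate-address \
  --instance-id "$INSTANCE_ID" \
  --allocation-id "$EIP_ALLOCATION_ID" \
  --allow-reassociation
```

With --allow-reassociation, a replacement instance can take over the EIP even if the old instance is still winding down.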
The FTP daemon was configured to enable both passive and active mode, with a fixed PASV address, which is the EIP (Elastic IP) of the server. We also specified a range of passive ports, which are enabled in the inbound list of ports within the security group for the FTP server.
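We haven't named the daemon here; assuming vsftpd for illustration, the relevant settings would look roughly like this (the IP and port range are placeholders):

```
# /etc/vsftpd.conf (excerpt) -- illustrative only; vsftpd is an assumption.
# Passive mode on, with PASV replies advertising the fixed Elastic IP
# rather than the instance's private address.
pasv_enable=YES
pasv_address=203.0.113.10
# Passive data-connection port range; must match the security group's
# inbound rules.
pasv_min_port=10000
pasv_max_port=10100
# Keep active (PORT) mode available too.
port_enable=YES
connect_from_port_20=YES
```

Whatever the daemon, the advertised passive range has to line up with the security group's inbound rules, or passive transfers will hang right after the PASV reply.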
You don't want to lose the FTP data if the instance terminates, nor the user accounts. Touching on the previous point, the ASG is set up as min=1 max=1. Also, we can handle a few minutes of downtime while a new instance is brought up.

For the user accounts, we use Puppet, so when the instance comes up, Puppet will ensure that the user accounts are created.

For the FTP data, we actually use a separate EBS volume, and the instance is set up to snapshot it every few minutes. If the instance were to die, a new instance would take the latest snapshot, create a new volume out of it, and mount it. It would, of course, also install the cron job to start snapshotting again. Both halves of that arrangement are sketched below.
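As a sketch rather than our production code (the volume ID, device name, mount point, schedule, and snapshot description are all placeholders), it could look like this:

```
# Half 1: cron entry on the instance -- snapshot the data volume
# every five minutes. /etc/cron.d/ftp-data-snapshot:
#   */5 * * * * root aws ec2 create-snapshot \
#       --volume-id vol-0123456789abcdef0 \
#       --description "ftp-data periodic snapshot"

# Half 2: at boot on a replacement instance, restore from the most
# recent snapshot. Assumes INSTANCE_ID and AZ were already read from
# instance metadata, as in the EIP script above.
LATEST_SNAPSHOT=$(aws ec2 describe-snapshots --owner-ids self \
  --filters Name=description,Values="ftp-data periodic snapshot" \
  --query 'sort_by(Snapshots,&StartTime)[-1].SnapshotId' --output text)

VOLUME_ID=$(aws ec2 create-volume --snapshot-id "$LATEST_SNAPSHOT" \
  --availability-zone "$AZ" --query 'VolumeId' --output text)

aws ec2 wait volume-available --volume-ids "$VOLUME_ID"
aws ec2 attach-volume --volume-id "$VOLUME_ID" \
  --instance-id "$INSTANCE_ID" --device /dev/xvdf

mount /dev/xvdf /srv/ftp   # then reinstall the cron entry above
```

The last few minutes of uploads are still at risk between snapshots, which is consistent with the "we can handle a few minutes of downtime" trade-off above.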
Hostname and credentials stay the same

As we migrate across to the new FTP server in AWS, we do not want users to have to change the FTP server hostname nor their credentials, because it's an annoyance and users don't like to be annoyed :-) How did we tackle this?

For the hostname, it is worth mentioning that the old FTP server we migrated from is actually a shared FTP server amongst different business units. We couldn't simply change the existing DNS record because it would affect all those other users. So, a few months ago, we contacted all our users and asked them to update their settings to point to a new hostname. This new hostname is actually a CNAME to the old legacy FTP server hostname. When it was time to start using the new FTP server, we simply updated the DNS record for the new hostname to point at the new FTP server and voilà! They started coming in.
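We haven't said which DNS provider hosts the zone; if it lived in Route 53, that cutover would be a single change batch along these lines, deleting the new hostname's CNAME to the legacy box and replacing it with an A record at the new server's Elastic IP (zone ID, hostnames, TTL, and IP are placeholders):

```
# Hypothetical Route 53 cutover -- the DNS provider is an assumption.
aws route53 change-resource-record-sets --hosted-zone-id Z0000000000000 \
  --change-batch '{
    "Changes": [
      {"Action": "DELETE",
       "ResourceRecordSet": {"Name": "ftp-new.example.com.", "Type": "CNAME",
         "TTL": 300,
         "ResourceRecords": [{"Value": "ftp-legacy.example.com."}]}},
      {"Action": "CREATE",
       "ResourceRecordSet": {"Name": "ftp-new.example.com.", "Type": "A",
         "TTL": 300,
         "ResourceRecords": [{"Value": "203.0.113.10"}]}}
    ]}'
```

A low TTL on the new hostname ahead of time keeps the switchover quick for clients.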
Of course, it's not quite seamless, as users had to update their hostname, but at least we gave them a couple of months to do it prior to the migration.

For the credentials bit, we had access to the usernames and hashed passwords on the old FTP server, so we simply migrated them across to the new FTP server.

Apart from a couple of minor issues, it has been probably the smoothest change I've done in a long time. I haven't quite tested a disaster scenario yet, but I'm sure AWS will schedule some maintenance at some point and force us to terminate our instance and watch it recover :-)