Data from AWS/S3 after 2 years

2 years back : Tried out S3 with various options 

3 months back : Re-enabled S3 on account and got billed unnecessarily

today : Realized that there were 40k+ log files, each close to a KB in size, lingering in the S3 bucket

now : Running the command on bitgeek.in, assuming it would help with the latencies (compared to making the requests from India to the US)

s3cmd rb --recursive s3://BUCKET (via StackOverflow)

now + a few hours : Freed from the frustration of stored (access) log files.

What can be done better here? : S3 should introduce 'Batch Calls' and 'Log Clean-up' mechanisms. The least I can add here is: latencies suck. There should be a way to get this done on the server side. Maybe it's much faster for EC2 users, who might share the same geographical region as the bucket.
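For what it's worth, S3 does expose a batched DeleteObjects call (in boto3 it is `delete_objects`), which accepts up to 1000 keys per request, so 40k+ log files could go in roughly 40 round trips instead of 40,000 individual deletes. A minimal sketch, assuming a key list is already in hand; the bucket and key names here are made up for illustration:

```python
def chunk(keys, size=1000):
    """Split a list of object keys into batches of at most `size`
    (1000 is the DeleteObjects per-request limit)."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

def delete_log_files(s3_client, bucket, keys):
    """Issue one batched delete_objects call per chunk of keys.
    `s3_client` would be a boto3 S3 client; not exercised here."""
    for batch in chunk(keys):
        s3_client.delete_objects(
            Bucket=bucket,
            Delete={"Objects": [{"Key": k} for k in batch]},
        )

if __name__ == "__main__":
    # Hypothetical key names mimicking the 40k access-log files.
    keys = ["logs/access-%05d.log" % i for i in range(40000)]
    batches = list(chunk(keys))
    print(len(batches))  # 40 requests instead of 40,000
```

The batching alone cuts the round-trip count by three orders of magnitude, which is exactly the India-to-US latency pain described above.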