Sep 4, 2008 at 6:00 AM
Edited Sep 4, 2008 at 6:26 AM
Thanks John, this is really helpful for my application. I think as folks build applications, something like this will be very useful for avoiding the browser-only paradigm.

A couple of questions:
Would it be possible to have a multiple-file upload in the form of "file:*.mp4"?
What happens if the connection is lost? Will it keep trying?
Is the error "ERROR: 403 Forbidden" unique to an unavailable bucket?
Do you have a list of returns for different conditions?

For my application the only other things I would like (or at least that I can think of now) would be:
A command that returns the S3 structure. (But I explicitly don't want the files, as there are hundreds in each terminating folder.)
bucket1/folder1
bucket1/folder2/subfolder1
bucket2
bucket3

And a command that deletes a bucket, or a folder/subfolder in a bucket along with all its subtending folders and files.

And maybe an option for simple return codes that are easy to parse and decode. (low priority)

This is just excellent!!!!
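[Editor's sketch: a multiple-file upload like "file:*.mp4" could be handled client-side by expanding the wildcard into concrete paths and issuing one put-object per match. A minimal illustration in Python; expand_file_pattern is a hypothetical helper, not part of the tool:]

```python
import glob

def expand_file_pattern(pattern):
    """Expand a wildcard pattern like '*.mp4' into the matching local
    file paths, sorted for predictable upload order."""
    return sorted(glob.glob(pattern))
```

Each returned path would then be uploaded individually, reusing the existing single-file put-object action.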
Sep 5, 2008 at 8:57 AM
Wow, what happened to the formatting?  Is there an option for EU storage?
Sep 5, 2008 at 11:51 PM
I notice that if I specify an unavailable bucket without "overwrite:true" set, I get a reasonable error message: "ERROR: 403 Forbidden".  However, if I choose somebody else's bucket with "overwrite:true", I don't get that error message. (But, of course, I don't get the "Put-object Complete" either.)
Sep 7, 2008 at 5:58 PM
Edited Sep 7, 2008 at 8:39 PM
Keep those suggestions coming - you're the first beta tester on this one!

Just uploaded a new version that supports two new actions, put-objects and delete-objects.  Also added an option to create buckets in the EU, and fixed that hidden error-message bug you described above.

By the way, this is just a very thin wrapper around the S3 API, so you'll see the return messages that S3 returns - 403s happen for a variety of reasons.

Also operations will be retried 3 times before failing (since that is the default behavior of the S3 client in the resourceful lib).
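[Editor's sketch: the retry-before-failing behavior described above can be illustrated generically. This is a hypothetical helper, not the actual resourceful lib code:]

```python
import time

def with_retries(operation, max_attempts=3, delay_seconds=1):
    """Call `operation` up to max_attempts times, pausing between
    attempts, and re-raise the last error if every attempt fails."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the original error
            time.sleep(delay_seconds)  # brief pause before retrying
```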

Sep 7, 2008 at 8:36 PM
Thanks John.  I'll play with it this week.  I think the version number is right?  How about the idea of a list-objects action that returns the object structure, bucket/folders (and I guess optionally the files too)?  Otherwise one would have to manually remember what objects there are in order to use the delete action.  Also, just so I'm understanding the syntax/terminology: is key-prefix used when files aren't put directly into the root bucket but into some subtending folder?  And if so, if you just put a file name in the key-prefix argument, isn't that the same as putting it in the key argument?
Sep 7, 2008 at 8:57 PM
S3 has no support for subfolders, it just has buckets (root-level folders) and keys in those buckets.

SpaceBlock (and other tools like S3Fox, etc.) simulate subfolders by taking advantage of the fact that key names can have slashes '/' in them.  e.g. a bucket called "bucket-folder" containing the keys:

a.txt
b.txt
folder1/c.txt

In SpaceBlock, a.txt and b.txt would appear in the "bucket-folder" and folder1 would appear as a subfolder of the "bucket-folder".

I am deliberately keeping this command-line tool closer to S3, rather than SpaceBlock, so it's useful to people who only know S3.  It should not be too difficult to map the folders you're expecting to bucket names and key-prefixes, now that you know the above.  e.g. deleting all the contents of folder1 would be accomplished by setting key-prefix:folder1/
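[Editor's sketch: the slash-in-key-names convention, and the key-prefix delete it enables, come down to plain string operations. A hypothetical illustration; neither function is part of the tool:]

```python
def simulated_folders(keys):
    """Split a flat list of S3 keys into top-level files and simulated
    subfolders, treating '/' in a key name as a path separator."""
    files, folders = [], set()
    for key in keys:
        if "/" in key:
            folders.add(key.split("/", 1)[0])  # text before the first slash
        else:
            files.append(key)
    return files, sorted(folders)

def keys_under_prefix(keys, key_prefix):
    """All keys that a delete with the given key-prefix would select,
    e.g. key_prefix 'folder1/'."""
    return [key for key in keys if key.startswith(key_prefix)]
```

With keys a.txt, b.txt, and folder1/c.txt, the first function reports a.txt and b.txt as files and folder1 as a simulated subfolder, and the second selects everything under folder1/ for deletion.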

Hope that helps!
- John

Sep 7, 2008 at 9:13 PM
Ah... That helps. (And reveals my ignorance. ;-)

Sep 8, 2008 at 9:28 AM
OK.  It all seems to work like a champ for my needs, but since I don't have an EU account I'm not sure about using the EU data center.  This leads me to another question.  I notice that I can use location:US or location:EU and my US account receives my put-object either way.  I assume that in addition to the location parameter, S3 must route based on keys, which must be associated with a data center.  But if that were true then I suppose you wouldn't need to specify it.  So I'm not understanding this completely. Thx.
Sep 8, 2008 at 2:52 PM
The way EU storage works in S3 is at the bucket level.  You can specify a datacenter location (US or EU) when you create the bucket - no need to have an account residing in the EU.  Once a bucket is created, all objects (keys) in the bucket will reside in the same datacenter as the bucket.
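[Editor's note: for reference, the underlying S3 REST API expresses this at bucket-creation time with a LocationConstraint element in the PUT-bucket request body; omitting it creates a US bucket:]

```xml
<CreateBucketConfiguration>
  <LocationConstraint>EU</LocationConstraint>
</CreateBucketConfiguration>
```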

I added the location parameter to put-object(s) since there is a create-bucket parameter, in case you want to specify where the new bucket should be created if it does not already exist. 

This is kind of confusing, though.  I think in the next version I'll remove the optional create-bucket and location parameters from the put-object(s) actions and create a new action called create-bucket with a location parameter, to make it more explicit.

Sep 9, 2008 at 1:59 AM
I just ran into a problem.  I did all my original testing on a Vista machine, which worked fine.  Now I'm trying it on an XP machine and get the following Application Error:
"The application failed to initialize properly (0xc0000135)".  I'll try it on another machine but just wanted to let you know. (This would be a first for me, to have something work on Vista but not XP. ;-)

Also, I don't mind having another action, but I like it as it is as well, since I now just use one action to do both (if necessary).
Sep 9, 2008 at 11:44 PM
Works on my XP laptop.  Doing a search, I find that I just need to add the .NET Framework to my old XP machine.  I love this utility.
Sep 9, 2008 at 11:50 PM
Edited Sep 10, 2008 at 12:01 AM
Beat me to the punch.  It requires at least .NET 2.0, which can be downloaded via Windows Update or separately here:   Congratulations on either a very clean or a very old XP machine - it's hard to find a machine without .NET 2.0 at this point; it's required by several other programs out there...

So I'll keep those create-bucket and location parameters if you're using them, and simply add an additional action called create-bucket in the next version.
Sep 10, 2008 at 5:23 AM
Thanks.  I've got something that seems bizarre. 

While testing failure legs I executed the following:

"H:\gamedayfilmz playpen\bin\rescmd" s3 put-object aws-key:1KPC0TVBAF4JHCWPHG82 aws-secret:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  file:"H:\gamedayfilmz playpen\bin\index.html"  bucket:"abc" key-prefix:"fred/" location:US create-bucket:true overwrite:true acl:public-read

and since I don't own bucket 'abc', and figuring someone else did, I expected it to return an invalid-bucket response, but to my surprise it was a successful transfer.  Not believing that something like 'abc' was not already taken, I opened SB to take a look.  And sure enough, it doesn't show up in my account.  I assume that you only report what is returned to you, but in any case this consistently passes as valid.

H:\>"H:\gamedayfilmz playpen\bin\rescmd" s3 put-object aws-key:1KPC0TVBAF4JHCWPHG82 aws-secret:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  file:"H:\gamedayfilmz playpen\bin\index.html"  bucket:"abc" key-prefix:"fred/" location:US create-bucket:true overwrite:true acl:public-read
Putting file [H:\gamedayfilmz playpen\bin\index.html] into bucket [abc] key [fred/index.html] with acl [public-read]...
Put-object complete.  Sent 1 bytes

Sep 10, 2008 at 7:21 AM
Any possibility that the delete-objects action could have a wildcard option, such as "... key:*.mp4 key-prefix:someprefix/"?
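[Editor's sketch: even without such an option in the tool, the same effect could be had client-side by listing the keys under the prefix and filtering them against the wildcard. A hypothetical illustration; keys_matching is not part of the tool:]

```python
import fnmatch

def keys_matching(keys, key_prefix, pattern):
    """Select keys under key_prefix whose remainder matches a wildcard
    pattern, e.g. prefix 'someprefix/' and pattern '*.mp4'."""
    selected = []
    for key in keys:
        if not key.startswith(key_prefix):
            continue
        leaf = key[len(key_prefix):]  # part of the key after the prefix
        if fnmatch.fnmatch(leaf, pattern):
            selected.append(key)
    return selected
```

The resulting key list would then be passed to the existing delete-objects action.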