Fix: s3fs Permission issue on remote folders

This article was published 4 years ago. Due to the rapidly evolving world of technology, some concepts may no longer be applicable.

I had a use for s3fs today, so off I went to its GitHub repo and installed it using the instructions there (which worked perfectly).

Once it was installed it wanted me to put my “S3 identity and credential in a file”. I eventually worked out that this meant my access key and secret key, concatenated with a colon and no quotes.
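In other words, the password file is a single line of the form ACCESS_KEY:SECRET_KEY. A minimal sketch of creating it (the key values below are placeholders, not real credentials):

```shell
# Write the credentials file s3fs expects: access key and secret key
# joined by a colon, on one line, with no quotes.
echo 'AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' > s3-credentials

# s3fs refuses a credentials file that other users can read,
# so restrict it to the owner.
chmod 600 s3-credentials
```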

This allowed me to access S3 with the following line:

s3fs ds-cli s3 -o passwd_file=s3-credentials

Now that I could access s3fs I tried to list one of my buckets, but got an error. The issue was that I didn’t have read or execute permission on the folders:

$ ls my-folder
ls: cannot open directory my-folder: Operation not permitted

$ ls -l
total 1
d--------- 1 root root 0 Jan  1  1970 my-folder/

Running s3fs with a umask of 022 set the read and execute bits correctly, and I could see my data fine (I also added a cache folder):

s3fs ds-cli s3 -o passwd_file=s3-credentials -o use_cache=/tmp -o umask=0022
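To see why the umask fixes it: the umask is subtracted (bitwise) from the default modes, so group and other keep their read and execute bits. A quick way to check the resulting modes with shell arithmetic:

```shell
# umask=0022 clears only the group/other write bits from the default modes,
# so directories show up as 0755 and files as 0644.
printf '%04o\n' $(( 0777 & ~0022 ))   # directory mode: 0755
printf '%04o\n' $(( 0666 & ~0022 ))   # file mode: 0644
```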

As an added bonus, here’s the command to unmount your share:

fusermount -u /path/to/mount
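If you want the share mounted automatically at boot, an /etc/fstab entry along these lines should work (the bucket name ds-cli and options are from the example above; the mount point /mnt/s3 and the credentials path are assumptions for illustration):

```
ds-cli /mnt/s3 fuse.s3fs _netdev,passwd_file=/etc/s3-credentials,use_cache=/tmp,umask=0022 0 0
```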

Thoughts on this post

Thanks. This is the only correct answer on the Internet as far as I can tell!

Chris Embree 2016-08-17

Yes, this is a correct answer. Assuming AWS correctly handles permissions, I’d even suggest umask=00000 and let Access Key permissions rule. I’m allowing users to mount using their own keys in this case.

Plus this up on SEO Google, it’s correct and hard to find.

Serge 2017-04-05

THANK YOU!!! Sheez, I was running into a brick wall with this. Where was the documentation on that???

Roland Deveraux 2017-09-04

You sir are a king amongst kings.

Jason M 2018-03-15

Thanks ~ wish I’d found this a couple hours faster but glad it was here in the end!

Laine Mikael 2018-09-26

Thanks! This helped me to fix my “Operation not permitted” issue (umask=0022)
