- Compatible XF Versions
- 2.0
- 2.1
- 2.2
- 2.3
XenForo Version Compatibility
The following guide works in either XF 2.0 or XF 2.1, but you need to ensure you are using the correct version of the add-on files. Both versions are included when you click the download button: download version 2.0.x if you are using XenForo 2.0.x, and version 2.1.x if you are using XenForo 2.1.x.
Why this guide?
Since XenForo 2.0.0 we have supported remote file storage via an abstracted file system named Flysystem. It is called "abstracted" because it adds a layer of abstraction between the code and the underlying file system: it provides a consistent API for file system operations, so whether the storage is a local disk or a distributed, remotely accessible service, our code calls the same functions and Flysystem takes care of the rest.
As useful as that is, it isn't the most obvious or straightforward thing to set up, so this guide and the accompanying add-on will help.
So, if you're planning to make use of the video uploads function in XF 2.1 and you're worried about increased disk space requirements, or even if you're staying with 2.0 for a while and you just need to offload your storage requirements elsewhere, this will help.
Making the required files available
Although it is possible for you to download the files and set up things like the autoloader yourself, you will probably prefer to simply download the add-on that is attached to this resource. You can install the add-on in the usual fashion.
Before you start
If you're setting this up on an existing site, you will need to manually move your existing files over. There's a section about that at the end. While you are moving the existing files, setting things up and testing, we recommend closing the forum first.
Autoloading the AWS SDK (for XenForo 2.0.x only)
The first thing to do is set up the autoloading of the AWS SDK.
Open your src/config.php file.
A number of different vendor packages are involved, all provided by the add-on attached to this resource. The following lines ensure they are all autoloaded. We do this here because the files need to be available as early as possible in the request:
Code:
\XFAws\Composer::autoloadNamespaces(\XF::app());
\XFAws\Composer::autoloadPsr4(\XF::app());
\XFAws\Composer::autoloadClassmap(\XF::app());
\XFAws\Composer::autoloadFiles(\XF::app());
Now we're ready to set up a specific implementation. Look below for the Setting up DigitalOcean Spaces guide or skip ahead for the Setting up Amazon S3 guide.
Setting up DigitalOcean Spaces
We'll cover this first as it is the most straightforward to set up. If you'd prefer to use Amazon S3, skip ahead to the Setting up Amazon S3 section below.
- Go to the DigitalOcean Cloud page and sign up or log in.
- At this point, if you're new to DigitalOcean, you may need to set up billing.
- You will now be able to create a new project.
- Click the "Start using Spaces" link.
- Choose your datacenter region (I have chosen Amsterdam).
- Leave "Restrict File Listing" selected.
- Choose a unique name (I have chosen "xftest").
- Click "Create a space".
Now we need to create some API credentials. To do this:
- Click "Manage" in the left sidebar.
- Click "API".
- In the "Spaces access keys" section click "Generate New Key".
- Type a name for the key (Again, I have chosen "xftest") and save.
Configuring XF to use DigitalOcean Spaces
We now need to configure XF to use DigitalOcean Spaces for file storage. We'll start with what usually goes into the data directory first. This generally includes attachment thumbnails and avatars.
Open your src/config.php file.
First we need to configure the Amazon S3 client (the DigitalOcean Spaces API is compatible with the Amazon AWS SDK).
We will do this using a closure so that we can reuse the same code and we only have to type it out once:
Code:
$s3 = function()
{
return new \Aws\S3\S3Client([
'credentials' => [
'key' => 'ABC',
'secret' => '123'
],
'region' => 'ams3',
'version' => 'latest',
'endpoint' => 'https://ams3.digitaloceanspaces.com'
]);
};
Note that the key and secret are what you noted down after setting up the "Spaces access key" earlier. The region can be inferred from the endpoint URL you noted down earlier: it's the part after the first "." in the URL, which in my case is ams3. The endpoint is the same endpoint URL minus the unique name you chose.
Next we need to set up the actual Flysystem adapter to use the S3 client:
Code:
$config['fsAdapters']['data'] = function() use($s3)
{
return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'xftest', 'data');
};
Finally, we need to ensure that avatar and attachment thumbnail URLs are prepended with the correct URL. This requires the endpoint URL you noted down earlier, again:
Code:
$config['externalDataUrl'] = function($externalPath, $canonical)
{
return 'https://xftest.ams3.digitaloceanspaces.com/data/' . $externalPath;
};
At this point, everything should be working in terms of new uploads. Don't be alarmed if you notice that avatars and thumbnails are missing; if you have existing files, they will need to be moved over manually which we'll go through later.
First, we need to test that the configuration works. Simply go and upload a new avatar. The avatar will now be stored and served remotely!
If you check your DigitalOcean Spaces account now, you should see that new folders have been created containing your new avatar:
Success! But we're only halfway there!
We now need to add support for the internal_data directory too. Generally, this holds attachments and anything else that should be "private". Back in config.php, the code to add is very similar:
Code:
$config['fsAdapters']['internal-data'] = function() use($s3)
{
return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'xftest', 'internal_data');
};
Now try to upload an attachment to a post and, much like before, you should now see additional files and folders in your Spaces file browser.
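For reference, here is the whole Spaces configuration from this section assembled into a single src/config.php fragment. This is a sketch using the example values from this guide (the "xftest" Space, the ams3 region, and placeholder credentials); substitute your own values throughout:

```php
// src/config.php additions for DigitalOcean Spaces (example values from this guide).

// XF 2.0.x only: autoload the bundled AWS SDK as early as possible in the request.
\XFAws\Composer::autoloadNamespaces(\XF::app());
\XFAws\Composer::autoloadPsr4(\XF::app());
\XFAws\Composer::autoloadClassmap(\XF::app());
\XFAws\Composer::autoloadFiles(\XF::app());

// Reusable S3 client closure (the Spaces API is S3-compatible).
$s3 = function()
{
    return new \Aws\S3\S3Client([
        'credentials' => [
            'key' => 'ABC',    // your Spaces access key
            'secret' => '123'  // your Spaces secret
        ],
        'region' => 'ams3',
        'version' => 'latest',
        'endpoint' => 'https://ams3.digitaloceanspaces.com'
    ]);
};

// Flysystem adapters for the data and internal_data directories.
$config['fsAdapters']['data'] = function() use($s3)
{
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'xftest', 'data');
};
$config['fsAdapters']['internal-data'] = function() use($s3)
{
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'xftest', 'internal_data');
};

// Serve avatars and attachment thumbnails directly from the Space.
$config['externalDataUrl'] = function($externalPath, $canonical)
{
    return 'https://xftest.ams3.digitaloceanspaces.com/data/' . $externalPath;
};
```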
Setting up Amazon S3
- Go to the AWS Management Console page and sign up or log in.
- In the "AWS services" section type "S3" to go to the "S3 Console".
- Click "Create bucket".
- Choose a bucket name (I have chosen xftest).
- Choose a region (I have chosen EU London).
- Accept any further default options until the bucket is created.
- You now need to go to the "IAM" console.
- Click "Add user".
- Pick a username (yep, I used xftest again).
- Set the access type to "Programmatic".
- To set permissions, click the "Attach existing policies directly" tab followed by the "Create policy" button.
- IAM and the various policies and permissions can be fairly daunting. We can make it a bit easier, though you may have different requirements. On this page there is a tab called "JSON". Paste the following in there, replacing YOUR-BUCKET-NAME with the bucket name you chose earlier:
Code:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:ReplicateObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::YOUR-BUCKET-NAME",
"arn:aws:s3:::YOUR-BUCKET-NAME/*"
]
}
]
}
- Click "Review policy", give it a name and save.
- Go back to the previous "Add user" page, click the "Refresh" button and search for the policy you just created.
- Click "Next", followed by "Create user".
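If you prefer the command line, the same IAM setup can be sketched with the AWS CLI. This assumes the CLI is installed and configured with administrative credentials, that the JSON policy above is saved as policy.json, and that "xftest", "xftest-s3" and ACCOUNT_ID are placeholders for your chosen names and AWS account ID:

```shell
# Create the user that XenForo will authenticate as.
aws iam create-user --user-name xftest

# Create the policy from the JSON document above; this prints the
# new policy's ARN, which is needed in the attach step below.
aws iam create-policy --policy-name xftest-s3 \
    --policy-document file://policy.json

# Attach the policy to the user (substitute your account ID).
aws iam attach-user-policy --user-name xftest \
    --policy-arn arn:aws:iam::ACCOUNT_ID:policy/xftest-s3

# Prints the AccessKeyId and SecretAccessKey to use in config.php.
aws iam create-access-key --user-name xftest
```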
Configuring XF to use Amazon S3
We now need to configure XF to use Amazon S3 for file storage. We'll start with what usually goes into the data directory first. This generally includes attachment thumbnails and avatars.
Open your src/config.php file.
We will do this using a closure so that we can reuse the same code and we only have to type it out once:
Code:
$s3 = function()
{
return new \Aws\S3\S3Client([
'credentials' => [
'key' => 'ABC',
'secret' => '123'
],
'region' => 'eu-west-2',
'version' => 'latest',
'endpoint' => 'https://s3.eu-west-2.amazonaws.com'
]);
};
Note that the key and secret are what you noted down after setting up the IAM user earlier. The region can be inferred from the S3 endpoint URL; in this example it is eu-west-2 (EU London).
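As an aside, and as a sketch rather than a requirement: when you are talking to Amazon S3 itself (as opposed to an S3-compatible service like Spaces), the endpoint entry can generally be omitted, because the AWS SDK derives the endpoint from the region:

```php
// Minimal client closure for Amazon S3 proper: no explicit endpoint needed,
// the SDK builds https://s3.eu-west-2.amazonaws.com from the region itself.
$s3 = function()
{
    return new \Aws\S3\S3Client([
        'credentials' => [
            'key' => 'ABC',    // your IAM access key
            'secret' => '123'  // your IAM secret
        ],
        'region' => 'eu-west-2',
        'version' => 'latest'
    ]);
};
```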
Next we need to set up the actual Flysystem adapter to use the S3 client:
Code:
$config['fsAdapters']['data'] = function() use($s3)
{
return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'xftest', 'data');
};
Finally, we need to ensure that avatar and attachment thumbnail URLs are prepended with the correct URL:
Code:
$config['externalDataUrl'] = function($externalPath, $canonical)
{
return 'https://xftest.s3.eu-west-2.amazonaws.com/data/' . $externalPath;
};
At this point, everything should be working in terms of new uploads. Don't be alarmed if you notice that avatars and thumbnails are missing; if you have existing files, they will need to be moved over manually which we'll go through later.
First, we need to test that the configuration works. Simply go and upload a new avatar. The avatar will now be stored and served remotely!
If you check your bucket file browser now, you should see that new folders have been created containing your new avatar:
Success! But we're only halfway there!
We now need to add support for the internal_data directory too. Generally, this holds attachments and anything else that should be "private". Back in config.php, the code to add is very similar:
Code:
$config['fsAdapters']['internal-data'] = function() use($s3)
{
return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'xftest', 'internal_data');
};
Now try to upload an attachment to a post and, much like before, you should now see additional files and folders in your bucket file browser.
Moving existing files to DigitalOcean Spaces or Amazon S3
So, you now have remotely hosted files. At least, you do from this point onwards. But what about all of the existing files you have?
Thankfully, there are several ways to interact with Spaces and S3 that make moving your existing content over very easy. Although this is a one-time operation, it could take a significant amount of time depending on the number and size of the files.
There are a few ways to manage this process, but arguably the best approach is a tool called s3cmd, a popular cross-platform command-line tool for managing S3 and S3-compatible object stores.
Whether you are using Spaces or S3, you should be able to install s3cmd on your server and run a few commands to copy the files across to their new home.
Rather than rehashing something that has already been written, I'll leave you with the following guide from DigitalOcean which goes through how to migrate your existing files using s3cmd.
s3cmd 2.x Setup :: DigitalOcean Product Documentation
s3cmd is a popular cross-platform command-line tool for managing S3 and S3-compatible object stores. To use s3cmd, you will need s3cmd version 2.0.0 or higher. You can check your version with s3cmd --version. Versions from package managers may be out of date, so we recommend using the s3cmd...
Note: When copying your existing data files across, they will need to be made public. You can do this by setting the ACL to public while copying:
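As a sketch (assuming s3cmd is already configured, you are running it from your XenForo root, and using the example "xftest" bucket/Space name from this guide), the copy commands might look like this. Note that only the data directory needs the public ACL; internal_data should stay private:

```shell
# data must be publicly readable (avatars, attachment thumbnails).
s3cmd sync --acl-public data/ s3://xftest/data/

# internal_data (attachments etc.) should remain private, so no ACL flag.
s3cmd sync internal_data/ s3://xftest/internal_data/
```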
s3cmd 2.x Usage :: DigitalOcean Product Documentation
s3cmd is a popular cross-platform command-line tool for managing S3 and S3-compatible object stores. Once you’ve set up s3cmd, you can use it to manage your Spaces and files. If you’re using an alternative configuration file, you must explicitly provide it at the end of each command by...