AmazonS3write()
Amazon S3: Copies a local file, or in-memory data, up to Amazon S3
Usage
BOOLEAN = AmazonS3write(
datasource,
bucket,
key,
file,
storageclass,
metadata,
data,
mimetype,
retry,
retrywaitseconds,
deletefile,
background,
callback,
callbackdata,
acl,
aes256key,
customheaders
)
Argument | Summary |
---|---|
datasource | Amazon datasource |
bucket | Amazon S3 bucket |
key | full S3 key |
file | path to the local file to send. If not supplied, the 'data' argument is used as the object data |
storageclass | storage class to store this object; Standard or ReducedRedundancy [optional] |
metadata | structure of data that will be stored with the object. Available via AmazonS3GetInfo() or any HTTP header call to the object [optional] |
data | variable with the object data. Cannot be used with 'file'. If not a string or a binary, it will be encoded into JSON and stored as application/json [optional] |
mimetype | mimetype of the data. If not supplied, an attempt is made to guess the mimetype [optional] |
retry | number of times to retry before giving up; defaults to 1 [optional] |
retrywaitseconds | number of seconds to wait before retrying; defaults to 1 [optional] |
deletefile | if a file was supplied, deletes it after a successful upload [optional] |
background | flag to determine if this upload runs in a background process, returning immediately; defaults to false. Only for use with the 'file' attribute [optional] |
callback | if background=true the method, onAmazonS3Write(file,success,callbackdata,error), will be called on the CFC passed in [optional] |
callbackdata | a string that will be passed on through to the callback function; can be any string [optional] |
acl | ACL to set: private, public-read, public-read-write, authenticated-read, bucket-owner-read, bucket-owner-full-control, log-delivery-write [optional] |
aes256key | optional AES-256 key, Base64 encoded, that Amazon uses to encrypt the file at rest. If you write the file using encryption, you must supply the same key to the AmazonS3Read() function to read the file back. To create an AES key, use the OpenBD function GenerateSecretKey('aes',256) [optional] |
customheaders | a map of custom headers to pass alongside the request, for example Cache-Control. This is not metadata [optional] |
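As a minimal sketch of a synchronous upload of in-memory data (rather than a file), assuming a registered datasource named "amz" and an existing bucket "mybucket" (both illustrative):

```cfml
<cfscript>
// Register the Amazon datasource once (illustrative credentials)
AmazonRegisterDataSource( "amz", "--amazonkey--", "--amazonsecretkey--" );

// Upload an in-memory string as an object, with custom metadata.
// 'data' and 'file' are mutually exclusive.
success = AmazonS3Write(
	datasource = "amz",
	bucket     = "mybucket",
	key        = "/reports/today.txt",
	data       = "Report contents as a string",
	mimetype   = "text/plain",
	metadata   = { author : "reports", version : "1" },
	acl        = "private",
	retry      = 3,
	retrywaitseconds = 5
);
</cfscript>
```

Since no 'file' is supplied, the 'data' argument is used; the stored metadata is later retrievable via AmazonS3GetInfo().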
Calling
Supports named-parameter calling allowing you to use the function like:
AmazonS3write( datasource=?, bucket=?, key=?, file=?, storageclass=?, metadata=?, data=?, mimetype=?, retry=?, retrywaitseconds=?, deletefile=?, background=?, callback=?, callbackdata=?, acl=?, aes256key=?, customheaders=? );
Supports passing parameters as a structure using ArgumentCollection:
AmazonS3write( ArgumentCollection={ datasource : ?, bucket : ?, key : ?, file : ?, storageclass : ?, metadata : ?, data : ?, mimetype : ?, retry : ?, retrywaitseconds : ?, deletefile : ?, background : ?, callback : ?, callbackdata : ?, acl : ?, aes256key : ?, customheaders : ? } );
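Because an object written with 'aes256key' can only be read back with the same key, a round-trip sketch may help; this assumes AmazonS3Read() accepts a matching 'aes256key' argument as noted in the table above, and the datasource/bucket names are illustrative:

```cfml
<cfscript>
// Generate a Base64-encoded AES-256 key and keep it safe;
// without it the object cannot be read back
secretKey = GenerateSecretKey( "aes", 256 );

AmazonS3Write(
	datasource = "amz",
	bucket     = "mybucket",
	key        = "/secret/document.txt",
	data       = "sensitive content",
	mimetype   = "text/plain",
	aes256key  = secretKey
);

// Reading the object back requires the identical key
content = AmazonS3Read(
	datasource = "amz",
	bucket     = "mybucket",
	key        = "/secret/document.txt",
	aes256key  = secretKey
);
</cfscript>
```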
Extra
The following example uploads the file 'largeFileToUpload.txt' in the background, attempting the upload up to 3 times with 10 seconds between each retry. If the upload succeeds, the file is deleted from the local file system; if it fails, the file remains. The CFC 'callbackcfc.cfc' will be loaded and its method 'onAmazonS3Write()' called. The CFC stub can be seen below.
<cfscript>
AmazonRegisterDataSource( "amz", "--amazonkey--", "--amazonsecretkey--" );

AmazonS3Write(
	datasource = "amz",
	bucket = "mybucket",
	file = "/tmp/largeFileToUpload.txt",
	key = "/largeFileToUpload.txt",
	background = true,
	retry = 3,
	retrywaitseconds = 10,
	deletefile = true,
	callback = "callbackcfc",
	callbackdata = "ExtraDataToPassToCallbackCFC"
);
</cfscript>
The CFC callback stub looks like:
<cfcomponent>
	<cffunction name="onAmazonS3Write">
		<cfargument name="file" type="string">
		<cfargument name="success" type="boolean">
		<cfargument name="callbackdata" type="string">
		<cfargument name="error" type="string">
		<!--- do something --->
	</cffunction>
</cfcomponent>
A new instance of the CFC is created for each callback, with the application scope of the application that originated the AmazonS3Write() call available to it.
When you background an upload, the local file remains in place; however, a job file is written to the 'amazons3uploader' directory inside the OpenBD working directory (the place you find the 'bluedragon.log' file). Background jobs survive server restarts as long as this directory is not deleted. For every attempt, a log entry is made in 'bluedragon.log' to track progress.