Single Line Web2Web Copy Cloud Function Using GNU/Linux Commands

Disclaimer: This is just a POC. Do not use it in production without error handling and request filtering.

I wanted to find an easy way to copy a web-hosted file to another web location via the HTTP PUT method, and also have the ability to trigger it via an API call for a proof of concept (POC). After trying and failing multiple times with two different programming languages, I decided to write a simple solution using just 4 GNU/Linux commands.

My example architecture

As you can see, what I basically want is to download a file from example1.com and upload it to example2.com under the name given in the API request. The API call initiator will be a job server that resides either in the cloud or on-premises, and it will send an API request with a JSON payload.

The diagram above shows my example architecture. If you are not familiar with cloud functions, please watch this video first.

Example JSON Payload
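The payload only needs to tell the function where to read from, where to write to, and what to call the file. A minimal sketch of such a payload; the field names source_url, destination_url, and file_name are placeholders I use for illustration, not a fixed schema:

    {
      "source_url": "https://example1.com/files/report.pdf",
      "destination_url": "https://example2.com/uploads/report.pdf",
      "file_name": "report.pdf"
    }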

Let's Do This

First of all, you must create the two mandatory files required by Oracle Cloud Functions:

  1. func.yaml : the manifest for the function, which includes the language or wrapper method used.

  2. Dockerfile : just a Docker file :-) with hotwrap (https://github.com/fnproject/hotwrap) injected alongside your code.

Source Code

func.yaml
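A minimal sketch of what this manifest could look like for a Docker-based function; the name, version, and memory values here are placeholders (memory: 2048 matches the 2 GB needed for the maximum /tmp limit discussed below):

    schema_version: 20180708
    name: web2webcopy
    version: 0.0.1
    runtime: docker
    memory: 2048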

Dockerfile
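A sketch following the pattern from the hotwrap README; the one-liner in CMD is my reconstruction assuming the placeholder payload fields from the example above, not necessarily the exact original command:

    FROM alpine:latest

    # jq parses the JSON payload, curl does the HTTP transfers
    RUN apk add --no-cache curl jq

    # hotwrap feeds each incoming request body to CMD via stdin
    COPY --from=fnproject/hotwrap:latest /hotwrap /hotwrap

    # read payload -> build download/upload commands -> run them via sh -> clean /tmp
    CMD cat - | jq -r '"curl -s -o /tmp/\(.file_name) \(.source_url) && curl -s -X PUT -T /tmp/\(.file_name) \(.destination_url)"' | sh && rm -rf /tmp/*

    ENTRYPOINT ["/hotwrap"]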

How to Test My 1-Line Code?

Execute the following code in your Cloud Shell or any GNU/Linux shell (the jq and curl commands are required). Please note that I have removed | sh and rm -rf /tmp/* from the testing command.
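For example, piping in the sample payload from earlier via echo (again assuming my placeholder field names):

    echo '{"source_url":"https://example1.com/files/report.pdf","destination_url":"https://example2.com/uploads/report.pdf","file_name":"report.pdf"}' \
      | jq -r '"curl -s -o /tmp/\(.file_name) \(.source_url) && curl -s -X PUT -T /tmp/\(.file_name) \(.destination_url)"'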

It should print something like the following. The final | sh command in the Dockerfile is basically there to execute the generated commands via sh.
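With the sample payload above, the generated command line would be:

    curl -s -o /tmp/report.pdf https://example1.com/files/report.pdf && curl -s -X PUT -T /tmp/report.pdf https://example2.com/uploads/report.pdf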

What Are the Limitations?

As per https://docs.oracle.com/en-us/iaas/Content/Functions/Tasks/functionsaccessinglocalfilesystem.htm, you can only write to the /tmp mount path, and the maximum file size limit is 512 MB. To get that maximum, you must allocate 2 GB of RAM to the function; the default limit for /tmp is just 32 MB.

Note that the /tmp directory might be shared by successive invocations of the function. A file written by an earlier invocation of a function could still exist when the function is invoked a second time. (That is why I have rm -rf /tmp/* in my Dockerfile.)

What Have I Left to You?

This function returns nothing about the success or failure status of the GNU/Linux command execution. (You can get the exit status from the shell via $? and use an if statement to act on it and return a JSON-like payload, as sketched below.)
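A minimal sketch of that idea, which you could bake into the CMD as a small script; since hotwrap returns stdout as the function response, echoing a JSON status is enough (again assuming my placeholder payload fields):

    # run the generated download/upload commands
    cat - | jq -r '"curl -s -o /tmp/\(.file_name) \(.source_url) && curl -s -X PUT -T /tmp/\(.file_name) \(.destination_url)"' | sh
    # $? holds the exit status of the sh pipeline; report it as a JSON-like payload
    if [ $? -eq 0 ]; then echo '{"status":"success"}'; else echo '{"status":"failed"}'; fi
    rm -rf /tmp/*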

Enjoy :-)
