Uploading and downloading product artifacts

Overview of uploading and downloading product artifacts

Product artifacts are files associated with your purchase and can include datasets, additional files, and standalone software. Depending on their credentials, partners and users can upload and download product artifacts using cURL commands, the AWS CLI, s3fs, and other S3-compatible tools.

Partners uploading product artifacts must also upload metadata.json to cloud object storage after uploading the artifacts.


Get pull secret

Create and copy a pull secret from My Account.

Procedure

  1. On the main menu, click your user name, click My Account, click Pull secrets, and then click Create pull secret.
  2. In the Pull secret name box, enter a unique name for your pull secret.
  3. To get your pull secret, in the Your pull secret box, click the Copy button (copy icon).
  4. Click Save.

Result

You copied your pull secret.

Next steps

Using your pull secret, generate access tokens to upload or download product artifacts.


Generate API key and HMAC credentials

Depending on the pull secret, credentials grant either read or read/write access to cloud object storage.

Prerequisites

  • A pull secret. For more information, see Get pull secret.

Procedure

  • Using your pull secret, generate credentials using the following command.

    curl "https://marketplace.redhat.com/entitlement/v1/credentials?type=hmac" -H
    "Authorization: Bearer {pull_secret}"

Result

Credentials show in the response. For example:

{"data":{"credentials":{"apiKey":"API Key","accessKey":"Access Key"
ID","secretAccessKey":"Secret Access Key"}}}

Upload product artifacts using cURL

Depending on credentials, partners can upload individual product artifacts using cURL.

Prerequisites

  • A pull secret with read/write access. For more information, see Get pull secret.
  • The bucket name. For more information, see Get bucket name.

Procedure

  1. To connect to cloud object storage, use your pull secret to generate an access token by running the following command.

    curl -X "POST" "https://iam.cloud.ibm.com/oidc/token" \
    -H "Accept: application/json" \
    -H "Content-Type: application/x-www-form-urlencoded" \
    --data-urlencode "apikey=$(echo $(curl -s -k -X GET "https://marketplace.redhat.com/entitlement/v1/credentials" -H "Authorization: Bearer {pull_secret}") | sed -n 's|.*"apiKey":"\([^"]*\)".*|\1|p')" \
    --data-urlencode "response_type=cloud_iam" \
    --data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey"
  2. The command returns a JSON object with multiple fields. To get your access token, copy the value of the access_token field.

    For example:

    {
    "access_token":"{token}",
    "refresh_token":"not_supported",
    "token_type":"Bearer",
    "expires_in":3600,
    "expiration":1653343356,
    "scope":"ibm openid"
    }
  3. To upload files, using your access token, run the following command.

    curl -X "PUT" "https://s3.us.cloud-object-storage.appdomain.cloud/(bucket-name)/(file-path)" \
    -H "Authorization: Bearer (token)" \
    --data-binary @'<file_path_from_your_drive>'

    For example:

    curl -X "PUT" "https://s3.us.cloud-object-storage.appdomain.cloud/rhm-sand-edition-200321/sampledata/sampledata_file.csv" \
    -H "Authorization: Bearer (token)" \
    --data-binary @'/Users/Downloads/sampledata_file.csv'

    Note: On terminal, a successful upload returns to the command prompt with no error message showing.
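
The steps above can also be combined into one script. The following is a minimal sketch, assuming jq is installed; the pull secret, bucket name, and file paths are placeholders:

# Fetch the API key, exchange it for an IAM access token, then upload the file
API_KEY=$(curl -s "https://marketplace.redhat.com/entitlement/v1/credentials" \
-H "Authorization: Bearer {pull_secret}" | jq -r '.data.credentials.apiKey')

TOKEN=$(curl -s -X "POST" "https://iam.cloud.ibm.com/oidc/token" \
-H "Accept: application/json" \
-H "Content-Type: application/x-www-form-urlencoded" \
--data-urlencode "apikey=${API_KEY}" \
--data-urlencode "response_type=cloud_iam" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" | jq -r '.access_token')

curl -X "PUT" "https://s3.us.cloud-object-storage.appdomain.cloud/(bucket-name)/(file-path)" \
-H "Authorization: Bearer ${TOKEN}" \
--data-binary @'<file_path_from_your_drive>'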

Result

You uploaded files to partner storage. Larger files take longer to upload, so allow more time for the command to return.

Next steps

Complete the process by uploading metadata.json to cloud object storage.

Related links

For more commands, refer to Using cURL on IBM Cloud Object Storage.


Upload and download product artifacts using AWS CLI

Depending on credentials, partners and users can upload and download product artifacts using the AWS CLI. The official command-line interface for AWS is compatible with the IBM® Cloud Object Storage S3 API.

Install AWS CLI

The AWS CLI is written in Python and can be installed from the Python Package Index when Python and pip exist on the local system. To install the AWS CLI, run the following command:

pip install awscliv2

Install on macOS or Linux

As an alternative to pip, users can install the AWS CLI using the following options.

  • macOS — Users can install the AWS CLI using the macOS package installer. To install, download the latest macOS package, double-click the downloaded file to launch the installer, and then complete the workflow.

  • Linux (Red Hat Enterprise Linux or CentOS) — Users can install the AWS CLI using the command line. To install, run the following commands:

    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip
    sudo ./aws/install
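
Whichever installation method you use, you can verify the install by checking the version:

aws --version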

Configure AWS CLI to connect to partner storage object

Prerequisites

  • HMAC credentials. For more information, see Generate API key and HMAC credentials.

Procedure

  • Using your HMAC credentials, create the ~/.aws/credentials and ~/.aws/config files. To create them, in the terminal, enter aws configure, and then provide the following information:

    • AWS Access Key ID — the access key value returned when generating credentials
    • AWS Secret Access Key — the secret access key value returned when generating credentials
    • Default region name — the default region name is us-standard
    • Default output format — the default output format is json

    For example:

    aws configure
    AWS Access Key ID [None]: {Access Key ID}
    AWS Secret Access Key [None]: {Secret Access Key}
    Default region name [None]: us-standard
    Default output format [None]: json
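
You can review the stored configuration at any time by running:

aws configure list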

Result

Configuration is complete. The following files were created:

  • ~/.aws/credentials

    [default]
    aws_access_key_id = {Access Key ID}
    aws_secret_access_key = {Secret Access Key}
  • ~/.aws/config

    [default]
    region = us-standard
    output = json

Set HMAC credentials through environment variables

Optionally, you can set HMAC credentials using environment variables. To set them, run the following commands:

export AWS_ACCESS_KEY_ID="{Access Key ID}"
export AWS_SECRET_ACCESS_KEY="{Secret Access Key}"
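
The credentials can also be supplied inline for a single command. For example (the bucket name is a placeholder):

AWS_ACCESS_KEY_ID="{Access Key ID}" AWS_SECRET_ACCESS_KEY="{Secret Access Key}" \
aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud s3 ls s3://(bucket-name)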

Next steps

After configuring the AWS CLI, proceed to Object operations.

Object operations

After configuring the AWS CLI, partners and users can interact with product artifacts using the following object operations.
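
Every operation below passes the same --endpoint-url flag. As a convenience, you can define a shell alias for it (the alias name cos is illustrative):

# Shorthand for the IBM Cloud Object Storage S3 endpoint
alias cos='aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud s3'
cos ls s3://(bucket-name)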

List product artifacts within a bucket

To list product artifacts, get the bucket name, and then run the following command:

aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud s3 ls s3://(bucket-name)

For example:

aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud s3 ls s3://rhmccp-fc84f481-1e43-4c5e-8d0a-6378ffad4fbb
PRE rhmccp-fc84f481-1e43-4c5e-8d0a-6378ffad4fbb_profile/
2021-07-18 05:33:49 1909 Border_Crossing_Entry_Data.csv
2021-07-18 05:33:56 248 metadata.json


Upload product artifacts to bucket

To upload product artifacts, get the bucket name, and then run the following command:

aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud s3 cp (file_path_from_your_drive_to_upload) s3://(bucket-name)

For example:

aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud s3 cp /Users/some_user/Downloads/sample_folder/profile.json s3://rhmccp-fc84f481-1e43-4c5e-8d0a-6378ffad4fbb
upload: Downloads/sample_folder/profile.json to s3://rhmccp-fc84f481-1e43-4c5e-8d0a-6378ffad4fbb/profile.json

To complete the process, upload metadata.json to cloud object storage.

Upload multiple product artifacts from a local directory to a bucket

To upload multiple product artifacts, get the bucket name, and then run the following command:

aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud s3 sync (directory_path_from_your_drive_to_upload) s3://(bucket-name)

For example:

aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud s3 sync /Users/some_user/Downloads/rhmccp-4f36f901 s3://rhmccp-4f36f901-fa35-4e6f-824a-0d47849ebba3/
upload: Downloads/rhmccp-4f36f901/311_service_requests_2020.csv to s3://rhmccp-4f36f901-fa35-4e6f-824a-0d47849ebba3/311_service_requests_2020.csv
upload: Downloads/rhmccp-4f36f901/some_Dir/311_service_requests_2020.csv to s3://rhmccp-4f36f901-fa35-4e6f-824a-0d47849ebba3/some_Dir/311_service_requests_2020.csv
upload: Downloads/rhmccp-4f36f901/311_service_requests_2011.csv to s3://rhmccp-4f36f901-fa35-4e6f-824a-0d47849ebba3/311_service_requests_2011.csv
upload: Downloads/rhmccp-4f36f901/some_Dir/311_service_requests_2011.csv to s3://rhmccp-4f36f901-fa35-4e6f-824a-0d47849ebba3/some_Dir/311_service_requests_2011.csv

To complete the process, upload metadata.json to cloud object storage.

Delete a product artifact from a bucket

To delete a product artifact, get the bucket name, and then run the following command:

aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud s3 rm s3://(bucket-name)/(file_path_name_to_delete)

Download a product artifact from a bucket

To download a product artifact, get the bucket name, and then run the following command:

aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud s3 cp s3://(bucket-name)/(file_path_name_to_download) (file_path_from_your_drive_to_download)

Download all product artifacts from a bucket to a local directory

To download all product artifacts, get the bucket name, and then run the following command:

aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud s3 sync s3://(bucket-name) (directory_path_from_your_drive_to_download)
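
For example, using the bucket from the earlier examples (the local directory path is illustrative):

aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud s3 sync s3://rhmccp-fc84f481-1e43-4c5e-8d0a-6378ffad4fbb /Users/some_user/Downloads/rhmccp-artifacts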

Additional information

For more information and commands for the AWS CLI, refer to the following:


Use S3 compatible tools to upload and download product artifacts

Partners and users may want to use a stand-alone utility to interact with their storage. The IBM Cloud Object Storage API supports the most common S3 API operations, and many S3-compatible tools can connect to Object Storage using HMAC credentials.

Some compatible tools include the following:


Mounting a bucket using s3fs

Applications that expect to read and write to an NFS-style filesystem can use s3fs, which mounts a bucket as a directory while preserving the native object format for files.

This allows you to interact with your cloud storage using familiar shell commands, such as ls to list files or cp to copy them, and provides access for legacy applications that rely on reading and writing local files.
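
For example, once a bucket is mounted (the mountpoint is a placeholder; the file name comes from the earlier cURL example):

# List the objects in the bucket as if they were local files
ls (mountpoint)
# Copy a local file into the bucket
cp /Users/Downloads/sampledata_file.csv (mountpoint)/sampledata/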

Prerequisites

  • HMAC credentials or an API key. For more information, see Generate API key and HMAC credentials.
  • The bucket name. For more information, see Get bucket name.

Procedure

  1. Install the s3fs build dependencies.

    • macOS — Using Homebrew, run the following commands:

      brew install autoconf
      brew install automake
      brew install libtool
      export PKG_CONFIG_PATH="/usr/local/opt/openssl/lib/pkgconfig"
    • Linux (Red Hat Enterprise Linux or CentOS) — To install, run the following commands:

      yum install gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel mailcap git automake make
      yum install openssl-devel
      export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
  2. Clone the s3fs source repository on GitHub. To clone, run the following command:

    git clone https://github.com/s3fs-fuse/s3fs-fuse.git
  3. Build s3fs. To build, run the following commands:

    cd s3fs-fuse
    ./autogen.sh
    ./configure --prefix=/usr --with-openssl
    make
    sudo make install
  4. Verify that s3fs installed successfully. To verify, run the following command:

    s3fs --version
  5. Configure s3fs. To configure, store your credentials in a file containing either accessKey:secretAccessKey or :apiKey.

    For example:

    echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs

    or

    echo :apiKey > ${HOME}/.passwd-s3fs
  6. Restrict access to the credentials file. To set permissions, run the following command:

    chmod 600 ${HOME}/.passwd-s3fs
  7. Mount the bucket using s3fs. To mount, run the following command:

    s3fs (bucket-name) (mountpoint) \
    -o url=https://s3.us.cloud-object-storage.appdomain.cloud \
    -o passwd_file=${HOME}/.passwd-s3fs

    For example:

    s3fs rhmccp-fc84f481-1e43-4c5e-8d0a-6378ffad4fbb /opt/rhmccp-fc84f481-1e43-4c5e-8d0a-6378ffad4fbb \
    -o url=https://s3.us.cloud-object-storage.appdomain.cloud \
    -o passwd_file=${HOME}/.passwd-s3fs

    Note: When the credentials file only has an API key (no HMAC credentials), you’ll need to add the ibm_iam_auth flag. To add the flag, run the following command:

    s3fs (bucket-name) (mountpoint) \
    -o url=https://s3.us.cloud-object-storage.appdomain.cloud \
    -o passwd_file=${HOME}/.passwd-s3fs -o ibm_iam_auth

Results

The bucket is mounted.

Next steps

After mounting, you can copy files or directories to the mountpoint and they will sync with the bucket. When finished, you can unmount the bucket. To unmount, run the following command:

umount (mountpoint)

Additional information

For more information, refer to the following.


Get bucket name

To upload and download product artifacts, partners and users need the bucket name where the artifacts are stored.

For partners

The bucket name shows during product pricing setup, in the Add a dataset to the edition section.

For users

The bucket name shows in Workspace. To find it, navigate to the product, click the Downloads tab, click OpenShift, and then click Step 3 — Mount to OpenShift. The bucket name shows in the command.


About metadata.json

The metadata.json file contains information about product artifacts uploaded to cloud object storage, such as file path, version, and last update. To complete the upload process, partners must upload metadata.json after uploading the product artifacts.

Note: When uploading product artifacts to more than one bucket, partners must include one metadata.json file per bucket.

Metadata.json field descriptions

The following shows descriptions for metadata.json fields.

  • version — the date and time of the last upload.
  • sequence — the number of times partners have uploaded files.
  • files — an array of file details.
  • fileName — the complete path to the file, when the file has been uploaded to partner storage. Use the same file path that you used to upload the files.
  • name — the file name. Shows on Workspace.
  • fileType — the file format. Shows on Workspace.
  • version — the current file version. Shows on Workspace.
  • fileLength — the file size in bytes. A representation shows on Workspace.
  • lastUpdatedDate — the date of the most recent file update. Shows on Workspace.
  • url — the complete path to the file, when the file is hosted in a different location, outside of partner storage.

Sample metadata.json for downloadable files

The following shows a sample metadata.json. To enable product artifacts for users, ensure you upload the file to cloud object storage.

{
  "data": {
    "contents": [
      {
        "version": "2021-02-12T09:40:41.342Z",
        "sequence": 4629,
        "files": [
          {
            "fileName": "sampledata/sampledata_file.csv",
            "name": "sampledata_file",
            "fileType": "csv",
            "version": "1",
            "fileLength": 14666026,
            "lastUpdatedDate": 1613008916000
          },
          {
            "fileName": "sampledata/sampledata_file.json",
            "name": "sampledata_file",
            "fileType": "json",
            "version": "1",
            "fileLength": 14666026,
            "lastUpdatedDate": 1613008919000
          },
          {
            "url": "https://dax-cdn.cdn.appdomain.cloud/dax-noaa-weather-data-jfk-airport/1.1.4/noaa-weather-data-jfk-airport.tar.gz",
            "name": "noaa-weather-data-jfk-airport",
            "fileType": "tar",
            "version": "1",
            "fileLength": 14735883,
            "lastUpdatedDate": 1613008920000
          }
        ]
      }
    ]
  }
}
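
Before uploading, it can help to confirm that the file parses and lists the expected artifacts. A minimal sketch using jq (an assumption; any JSON validator works):

# Exits non-zero if metadata.json fails to parse
jq empty metadata.json
# Print the artifact names declared in the file
jq -r '.data.contents[0].files[].name' metadata.json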

Example metadata.json upload file request

The following shows an example request to upload metadata.json directly to cloud object storage using cURL.

curl -X "PUT"
"https://s3.us.cloud-object-storage.appdomain.cloud/rhm-sand-edition-200321/metadata.json" \
-H "Authorization: Bearer (token)" \
--data-binary @'/Users/Downloads/metadata.json'

Note: On terminal, a successful upload returns to the command prompt with no error message showing.
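
One way to confirm the upload is to list the bucket contents with a standard S3 GET Bucket request; the response is an XML object listing (the bucket name below matches the example above):

curl "https://s3.us.cloud-object-storage.appdomain.cloud/rhm-sand-edition-200321" \
-H "Authorization: Bearer (token)"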