Remote Cloud Computing Expert – AI Trainer ($55-$75 per hour)

by Mercor Inc

Job role overview

  • Date posted

    May 7, 2026

  • Hiring location

    La Puente

Description

Role Overview
Mercor is partnering with leading AI labs on Project Atlas — an initiative to build realistic enterprise environments in which frontier AI agents are trained and evaluated. We're seeking experienced cloud-computing professionals from major hyperscalers and Fortune 500 enterprises running large cloud deployments (e.g., AWS, Microsoft Azure, Google Cloud, Oracle Cloud, Snowflake, Databricks, and Fortune 500 platform / infrastructure teams) to recreate the digital workspaces they run every day and design the tasks that genuinely challenge state-of-the-art AI. You'll bring your expertise in cloud architecture, site reliability, platform engineering, DevOps / DevSecOps, or cloud FinOps to build a high-fidelity environment that mirrors the tools, files, and cross-functional workflows of a modern cloud organization — and then author tasks grounded in the programs you actually run today.

Key Responsibilities

Build a realistic digital workspace centered on the Drive folders you use day-to-day — the architecture docs, runbooks, RFCs, incident post-mortems, capacity plans, cost reports, SRE review decks, and email threads that reflect how you actually organize your work — with some representation of the platforms that support it (e.g., HashiCorp Terraform, Datadog / Splunk, GitHub Actions, Okta)

Design multi-step tasks grounded in your real workflows that require navigating multiple apps, files, and stakeholders in a way that meaningfully challenges frontier AI agents

Collaborate with other cloud-computing experts in your field to design the environment, shape task scope, and review each other's scenarios for realism and rigor

Work asynchronously with research teams to refine task designs and evaluation criteria for cloud-computing agent benchmarks

Contribute to frontier AI research and benchmarking — the work you produce directly informs how leading labs train and evaluate the next generation of AI systems

Ideal Qualifications

3+ years of full-time experience at a major hyperscaler (AWS, Azure, GCP, Oracle Cloud), a cloud-data platform (Snowflake, Databricks), or a Fortune 500 platform / infrastructure team

Background in one or more areas such as:

Cloud architecture / solutions engineering (multi-account, multi-region, hybrid)

Site reliability engineering or production engineering

Platform / developer-experience engineering (IaC, internal developer platforms)

DevOps / DevSecOps, CI / CD, or container / Kubernetes operations

Cloud security, compliance (SOC 2, ISO 27001, FedRAMP), or cloud FinOps

Certifications a plus: AWS Solutions Architect / SysOps / DevOps, Azure Solutions Architect, GCP Professional Cloud Architect, CKA / CKAD

Day-to-day use of HashiCorp Terraform / Pulumi, Splunk / Datadog, GitHub Actions / CircleCI, and Okta / Microsoft Entra ID

Strong analytical thinking and writing — able to translate cloud-ops workflows into structured task specs

Compensation Note
This project is expected to begin at an effective hourly rate, but will transition to a model where experts are compensated based on throughput of quality work rather than a flat accruing hourly rate.


Work mode

On-site
