
Build an Autonomous AI Agent Workflow

Welcome to the forefront of AI automation. This guide provides a Python script that demonstrates a powerful concept: Agentic AI. Inspired by frameworks like AutoGPT and LangChain, this script simulates a multi-agent autonomous workflow where AI "digital coworkers" collaborate to solve complex problems with minimal human intervention. You give them a high-level goal, and they figure out the rest.

This is the next evolution beyond simple chatbots. We're building an AI workforce that can plan, reason, and execute tasks, opening up a new frontier of possibilities for automation.

The Core Concepts Explained

Our script uses a simple but powerful multi-agent architecture built from three roles:

  - The Planner Agent breaks your high-level goal down into a numbered list of small, executable subtasks.
  - The Executor Agent takes each subtask in turn and describes the actions it would take to complete it.
  - The Orchestrator is the main loop that hands the goal to the Planner and then feeds each task from the resulting plan to the Executor.
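
To make the division of labor concrete, here is a toy, self-contained sketch of the pattern with the LLM calls stubbed out; the real agents in the full script below replace these stubs with calls to the OpenAI API.

# Toy sketch of the Planner / Executor / Orchestrator pattern, with the LLM stubbed out.
def planner(goal):
    # Stub: the real Planner Agent asks an LLM to break the goal into subtasks.
    return [f"Research '{goal}'", f"Draft an approach for '{goal}'", "Summarize the findings"]

def executor(task):
    # Stub: the real Executor Agent asks an LLM to describe how it completes the task.
    print(f"Executing: {task}")

def orchestrate(goal):
    # Orchestrator: get a plan, then hand each subtask to the executor in order.
    for task in planner(goal):
        executor(task)

orchestrate("Launch a weekly AI newsletter")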

How to Use This Script:

  1. Prerequisites: You need Python 3 and pip installed. You will also need an OpenAI API key, since the script uses the OpenAI client library.
  2. Save the Script: Copy the full script at the end of this guide and save it in a file named `agentic_workflow.py`.
  3. Install Dependencies: Open your terminal or command prompt and run the following command to install the necessary libraries:
    pip install openai python-dotenv
  4. Configure API Key: In the same folder as your script, create a new file named `.env`. Inside this file, add your API key like this (replace the placeholder with your actual key):
    OPENAI_API_KEY='sk-YourSecretAPIKeyGoesHere'
  5. Run the Workflow: Navigate to your folder in the terminal and run the script:
    python agentic_workflow.py
  6. Define Your Goal: The script will prompt you to enter a goal. Try something complex like: "Develop a marketing plan for a new AI-powered coffee mug."
  7. Observe the Agents at Work: Watch your terminal as the Planner creates a detailed plan and the Executor tackles each task in sequence, printing its "actions" and findings. (If you would rather drive the agents from your own code instead of the interactive prompt, see the sketch just after this list.)
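
The script is interactive by default, but because the Planner and Executor are plain functions you can also drive them from your own code. A minimal sketch, assuming you saved the file as `agentic_workflow.py` in the same folder and configured your `.env` as in step 4:

# Minimal sketch: driving the agents programmatically instead of via the input() prompt.
# Assumes agentic_workflow.py (the script below) and a configured .env sit in this folder.
from agentic_workflow import planner_agent, executor_agent

goal = "Develop a marketing plan for a new AI-powered coffee mug."
for task in planner_agent(goal):
    executor_agent(task)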

The Benefits: Why is Agentic AI a Game Changer?

This approach moves beyond simple question-and-answer. It unlocks:

  - Goal decomposition: a vague, high-level objective is turned into a concrete, ordered list of subtasks automatically.
  - Autonomous execution: the agents work through the plan in sequence with minimal human intervention.
  - Extensibility: the same loop can be grown with tools, feedback loops, or additional agents (such as a critic) to handle more complex workflows.


import os
import re
from openai import OpenAI
from dotenv import load_dotenv

# --- Configuration ---
# Load environment variables from .env file (for API key)
load_dotenv()
API_KEY = os.getenv("OPENAI_API_KEY")
if not API_KEY:
    raise ValueError("OpenAI API key not found. Please create a .env file with OPENAI_API_KEY='your_key'")

client = OpenAI(api_key=API_KEY)
MODEL = "gpt-4o" # Or "gpt-3.5-turbo" for a faster, cheaper option

def run_llm(prompt, system_message):
    """Generic function to call the OpenAI API."""
    try:
        response = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": system_message},
                {"role": "user", "content": prompt},
            ],
            temperature=0.5,
        )
        return response.choices[0].message.content
    except Exception as e:
        print(f"An error occurred while calling the LLM: {e}")
        return None

def planner_agent(goal):
    """
    The Planner Agent: Decomposes a high-level goal into a step-by-step plan.
    """
    system_message = (
        "You are a world-class project planning AI. Your job is to take a complex user goal "
        "and break it down into a list of simple, executable subtasks. "
        "Each subtask must be a clear, single action. "
        "Return your response as a numbered list, with each task on a new line. Do not add any other text or explanation."
    )
    prompt = f"Create a step-by-step plan to achieve the following goal: '{goal}'"
    
    print("\n\033[94m[Planner Agent]\033[0m Thinking...")
    plan_str = run_llm(prompt, system_message)
    
    if plan_str:
        # Split the response into a list of tasks, removing any empty lines
        plan = [task.strip() for task in plan_str.split('\n') if task.strip()]
        # Strip the leading list number from each task (e.g., "1. " or "1) ") without touching the rest of the text
        plan = [re.sub(r'^\d+[\.\)]\s*', '', task) for task in plan]
        print("\033[94m[Planner Agent]\033[0m I have created a plan:")
        for i, task in enumerate(plan, 1):
            print(f"  {i}. {task}")
        return plan
    return []

def executor_agent(task):
    """
    The Executor Agent: Takes a single task and 'executes' it by providing a summary of actions.
    In a real system, this agent would use tools (e.g., web search, file access); a sketch of that appears after the script.
    """
    system_message = (
        "You are a diligent AI executor. Your job is to take a single, specific task "
        "and describe the actions you would take to complete it and what the expected outcome is. "
        "Be concise and clear. Frame your response as if you are performing the action."
    )
    prompt = f"Execute this task: '{task}'"
    
    print(f"\n\033[92m[Executor Agent]\033[0m Now executing: '{task}'")
    execution_result = run_llm(prompt, system_message)
    
    if execution_result:
        print(f"\033[92m[Executor Agent]\033[0m Result: {execution_result}")
    return execution_result

def main():
    """The main orchestrator loop."""
    print("\n--- Autonomous AI Agent Workflow ---")
    goal = input("Please enter your high-level goal: ")

    # 1. Planner creates the plan
    task_list = planner_agent(goal)

    if not task_list:
        print("\n\033[91m[Orchestrator]\033[0m The planner failed to create a plan. Exiting.")
        return

    # 2. Executor executes each task in the plan
    print("\n\033[93m[Orchestrator]\033[0m Starting execution of the plan...")
    for task in task_list:
        executor_agent(task)
        # In a real system, you might have a feedback loop or a critic agent here (see the sketch after the script).

    print("\n\033[93m[Orchestrator]\033[0m All tasks completed. The goal has been achieved.")

if __name__ == "__main__":
    main()
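
As the comment in the orchestrator loop notes, a production workflow would add a feedback loop rather than accepting every result blindly. One common pattern is a critic agent that reviews each Executor result before the workflow moves on. Here is a minimal sketch that reuses `run_llm` from the script above; the prompt wording and the APPROVED/REVISE convention are illustrative assumptions, not part of the original script.

# Hypothetical extension: a Critic Agent that reviews each Executor result.
# The prompt wording and the "APPROVED"/"REVISE" convention are illustrative assumptions.
def critic_agent(task, execution_result):
    system_message = (
        "You are a strict AI reviewer. Given a task and a description of how it was executed, "
        "reply with 'APPROVED' if the execution adequately addresses the task, "
        "or 'REVISE: <one sentence of feedback>' if it does not."
    )
    prompt = f"Task: '{task}'\nExecution result: '{execution_result}'"
    verdict = run_llm(prompt, system_message)
    print(f"\033[95m[Critic Agent]\033[0m Verdict: {verdict}")
    return verdict

# The execution loop in main() could then become:
# for task in task_list:
#     result = executor_agent(task)
#     critic_agent(task, result)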
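
The Executor's docstring also points out that a real agent would use tools such as web search or file access instead of merely describing its actions. The sketch below shows one way that dispatch might look; the `web_search` placeholder and the reply format it expects from the model are hypothetical, and a production version would call a real search API and handle malformed replies.

# Hypothetical sketch of a tool-using Executor. web_search() and the reply format are placeholders.
def web_search(query):
    # Placeholder: a real implementation would call a search API and return its results.
    return f"(pretend search results for '{query}')"

def executor_agent_with_tools(task):
    # Ask the LLM whether it needs the tool, run the tool if so, then complete the task.
    system_message = (
        "You are a diligent AI executor. If the task requires external information, reply exactly "
        "with 'web_search: <query>'. Otherwise, complete the task directly and concisely."
    )
    decision = run_llm(f"Execute this task: '{task}'", system_message)
    if decision and decision.startswith("web_search:"):
        query = decision.split(":", 1)[1].strip()
        observation = web_search(query)
        return run_llm(
            f"Task: '{task}'\nSearch results: {observation}\nNow complete the task concisely.",
            "You are a diligent AI executor.",
        )
    return decision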