Automate Any Website with Selenium
In 2025, the ability to automate interactions with the web is no longer a niche skill; it's a superpower. From quality assurance engineers building robust end-to-end testing pipelines to data scientists scraping information, browser automation is the engine behind the modern digital workforce. This guide walks through a popular and powerful pattern for the task: a Python script that uses Selenium to automate website login and data extraction.
We'll walk through a complete, real-world example that you can run on your own machine. This is your definitive starting point for mastering web automation.
The Power of Selenium: Your Browser's Robot Chauffeur
Selenium is an open-source framework that allows you to write code that controls a web browser, just as a human would. You can instruct it to navigate to URLs, find elements on a page, type text into forms, click buttons, and extract any visible data. It is the gold standard for UI testing and a cornerstone of modern Robotic Process Automation (RPA).
How to Use This Script: A Step-by-Step Guide
- Prerequisites: You need Python 3, pip (Python's package installer), and the Google Chrome browser installed on your computer.
- Save the Script: Click the "Copy Script" button below and save the code into a file named `selenium_automation.py`.
- Install Dependencies: This is the most crucial step. Open your terminal or command prompt and run this command to install Selenium and the `webdriver-manager` library, which automatically handles the browser driver for you: `pip install selenium webdriver-manager`
- Run the Automation: Navigate to your folder in the terminal and simply run the script: `python selenium_automation.py`
- Watch the Magic: A new Chrome window will open automatically. The script will navigate to a demo e-commerce site, type in a username and password, click the login button, and then scrape the titles and prices of products from the inventory page. The results will be printed in your terminal.
The Benefits: Beyond a Simple Script
Mastering this workflow unlocks a vast array of possibilities:
- Automated Testing: You can expand this script into a full test automation framework. It can test user registration, checkout processes, and other critical user journeys, ensuring your website works perfectly after every code change. This is essential for modern CI/CD pipelines.
- Data Scraping & Monitoring: Automatically track competitor prices, gather market research data, or monitor stock market websites for changes.
- Repetitive Task Automation: Automate filling out tedious web forms, generating reports from web dashboards, or any other repetitive browser-based task, freeing up valuable human time.
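For price tracking and monitoring, the scraped prices come back as strings like `"$29.99"`, which usually need light cleanup before you can compare or analyze them. Here is a minimal stdlib-only sketch; the `parse_price` and `cheapest` helpers are hypothetical additions for illustration, not part of the script below:

```python
def parse_price(price_text: str) -> float:
    """Convert a scraped price string like '$29.99' into a float."""
    return float(price_text.strip().lstrip("$"))

def cheapest(products: list) -> dict:
    """Return the product dict with the lowest numeric price."""
    return min(products, key=lambda p: parse_price(p["price"]))

# Example with data shaped like the script's scraped output:
sample = [
    {"name": "Sauce Labs Backpack", "price": "$29.99"},
    {"name": "Sauce Labs Bike Light", "price": "$9.99"},
]
print(cheapest(sample)["name"])  # Sauce Labs Bike Light
```

Keeping the raw price string in the scraped record and parsing it only when needed makes debugging easier if the site ever changes its price format.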
The Script: selenium_automation.py

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from webdriver_manager.chrome import ChromeDriverManager


def automate_sauce_demo():
    """
    Automate logging into the Sauce Labs demo e-commerce site and
    scrape product information from the main inventory page.
    """
    print("--- Starting Selenium Web Automation Script ---")

    # --- 1. Setup WebDriver ---
    # webdriver-manager automatically downloads and manages the correct driver
    # for your Chrome version. This is a modern best practice that eliminates
    # manual driver management.
    print("[SETUP] Initializing Chrome WebDriver...")
    try:
        service = ChromeService(ChromeDriverManager().install())
        # Optional: run in headless mode (no UI window) for server environments
        # options = webdriver.ChromeOptions()
        # options.add_argument("--headless")
        # driver = webdriver.Chrome(service=service, options=options)
        driver = webdriver.Chrome(service=service)
    except Exception as e:
        print(f"Error initializing WebDriver: {e}")
        return None

    # --- 2. Login Process ---
    try:
        print("[ACTION] Navigating to login page...")
        driver.get("https://www.saucedemo.com/")

        print("[ACTION] Entering credentials...")
        # Find the username field by its ID and type in the standard username
        driver.find_element(By.ID, "user-name").send_keys("standard_user")
        # Find the password field by its ID and type in the password
        driver.find_element(By.ID, "password").send_keys("secret_sauce")

        print("[ACTION] Clicking login button...")
        driver.find_element(By.ID, "login-button").click()
    except Exception as e:
        print(f"An error occurred during the login process: {e}")
        driver.quit()
        return None

    # --- 3. Data Scraping Process ---
    try:
        print("[ACTION] Waiting for inventory page to load...")
        # Use WebDriverWait to ensure the page has loaded before we try to
        # scrape. This is crucial for handling dynamic, modern web applications.
        # We wait up to 10 seconds for an element with the class
        # 'inventory_list' to be visible.
        WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.CLASS_NAME, "inventory_list"))
        )
        print("[SUCCESS] Inventory page loaded.")

        print("\n--- Scraping Product Data ---")
        # Find all elements that represent a product item
        inventory_items = driver.find_elements(By.CLASS_NAME, "inventory_item")
        scraped_data = []
        for item in inventory_items:
            # Within each item, find the name and price by their class names
            name = item.find_element(By.CLASS_NAME, "inventory_item_name").text
            price = item.find_element(By.CLASS_NAME, "inventory_item_price").text
            scraped_data.append({"name": name, "price": price})
            print(f" - Found: {name} | Price: {price}")

        print("\n[SUCCESS] Scraping complete.")
        return scraped_data
    except Exception as e:
        print(f"An error occurred during the scraping process: {e}")
        return None
    finally:
        # --- 4. Cleanup ---
        # Always close the browser window to free up system resources.
        print("\n[CLEANUP] Closing browser window...")
        driver.quit()
        print("--- Script Finished ---")


if __name__ == "__main__":
    scraped_products = automate_sauce_demo()
    if scraped_products:
        print(f"\nSuccessfully scraped {len(scraped_products)} products.")
```