## Page: https://docs.cline.bot/

## Cline Documentation

Welcome to the Cline documentation - your comprehensive guide to using and extending Cline's capabilities. Here you'll find resources to help you get started, improve your skills, and contribute to the project.

* **New to coding?** We've prepared a gentle introduction.
* **Want to communicate more effectively with Cline?** Explore the prompting guide.
* **Understand Cline's capabilities.**
* **Extend Cline with MCP Servers.**
* **Using Cline at the corporate level.**
* **Interested in contributing?** We welcome your input - feel free to submit a pull request.

We're always looking to improve this documentation. If you have suggestions or find areas that could be enhanced, please let us know. Your feedback helps make Cline better for everyone!

Last updated 2 days ago

---

## Page: https://docs.cline.bot/getting-started/for-new-coders

1. Getting Started

## For New Coders

Last updated 1 day ago

Welcome to Cline, your AI-powered coding companion! This guide will help you quickly set up your development environment and begin your coding journey with ease.

> 💡 **Tip:** If you're completely new to coding, take your time with each step. There's no rush - Cline is here to guide you!

Before you jump into coding, make sure you have these essentials ready:

* **VS Code** - a popular, free, and powerful code editor.

> ✅ **Pro Tip:** Install VS Code in your Applications folder (macOS) or Program Files (Windows) for easy access from your dock or start menu.

* Basic software required for coding efficiently:
  * Homebrew (macOS)
  * Node.js
  * Git

👉 Follow our detailed guide on Installing Essential Development Tools with step-by-step help from Cline.

> ⚠️ **Note:** If you run into permission issues during installation, try running your terminal or command prompt as an administrator.
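Before or after installing these essentials, you can confirm what is already on your system from a terminal. A minimal sketch - the `check_tool` helper below is illustrative, not a Cline feature:

```shell
# check_tool: hypothetical helper that reports whether a command
# is available on your PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: installed"
  else
    echo "$1: missing"
  fi
}

check_tool brew   # Homebrew (macOS)
check_tool node   # Node.js
check_tool git    # Git
```

Anything reported as missing is something Cline can help you install in the next section.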
Create a dedicated folder named `Cline` in your Documents folder for all your coding projects:

* **macOS:** `/Users/[your-username]/Documents/Cline`
* **Windows:** `C:\Users\[your-username]\Documents\Cline`

Inside your `Cline` folder, structure projects clearly:

* `Documents/Cline/workout-app` _(e.g., for a fitness tracking app)_
* `Documents/Cline/portfolio-website` _(e.g., to showcase your work)_

> 💡 **Tip:** Keeping your projects organized from the start will save you time and confusion later!

Enhance your coding workflow by installing the Cline extension directly within VS Code.

> ✅ **Pro Tip:** After installing, reload VS Code to ensure the extension is activated properly.

🎉 You're all set! Dive in and start coding smarter and faster with **Cline**.

---

## Page: https://docs.cline.bot/getting-started/installing-cline

1. Getting Started

## Installing Cline

Last updated 1 day ago

Cline is a VS Code extension that brings AI-powered coding assistance directly to your editor. Install using one of these methods:

* **VS Code Marketplace (Recommended):** Fastest method for standard VS Code and Cursor users.
* **Open VSX Registry:** For VS Code-compatible editors like VSCodium.

Follow these steps to get Cline up and running:

1. **Open VS Code:** Launch the VS Code application.

> ⚠️ **Note:** If VS Code shows "Running extensions might...", click "Allow".

2. **Open Your Cline Folder:** In VS Code, open the Cline folder you created in Documents.
3. **Navigate to Extensions:** Click on the Extensions icon in the Activity Bar on the side of VS Code (`Ctrl + Shift + X` or `Cmd + Shift + X`).
4. **Search for 'Cline':** In the Extensions search bar, type `Cline`.

Then:

1. **Install the Extension:** Click the "Install" button next to the Cline extension.
2. **Open Cline:**
   * Click the Cline icon in the Activity Bar.
   * Or, use the command palette (`Ctrl/Cmd + Shift + P`) and type "Cline: Open In New Tab" for a better view.
3. **Troubleshooting:** If you don't see the Cline icon, try restarting VS Code.

> ✅ **Pro Tip:** You should see the Cline chat window appear in your VS Code editor!

For VS Code-compatible editors without Marketplace access (like VSCodium and Windsurf):

1. Open your editor.
2. Access the Extensions view.
3. Search for "Cline".
4. Select "Cline" by saoudrizwan and click **Install**.
5. Reload if prompted.

Now that you have Cline installed, let's get you set up with your account:

1. **Sign In to Cline:** Click the **Sign In** button in the Cline extension. You'll be taken to the Cline website to create your account.
2. **Start with Free Credits:** No credit card needed!
3. **Available AI Models:**
   * Anthropic Claude 3.5-Sonnet (recommended for coding)
   * DeepSeek Chat (cost-effective alternative)
   * Google Gemini 2.0 Flash
   * And more - all through your Cline account.

You're ready to start building! Copy and paste this prompt into the Cline chat window:

> Hey Cline! Could you help me create a new project folder called "hello-world" in my Cline directory and make a simple webpage that says "Hello World" in big blue text?

> ✅ **Pro Tip:** Cline will help you create the project folder and set up your first webpage!

* **Ask Questions:** If you're unsure about something, ask Cline!
* **Use Screenshots:** Cline can understand images - show it what you're working on.
* **Copy and Paste Errors:** Share error messages in the chat for solutions.
* **Speak Plainly:** Use your own words - Cline will translate them into code.

Join our Discord community and engage with our team and other Cline users directly.

---

## Page: https://docs.cline.bot/getting-started/installing-dev-essentials

1. Getting Started

## Installing Dev Essentials

When you start coding, you'll need some essential development tools installed on your computer.
Cline can help you install everything you need in a safe, guided way. Here are the core tools you'll need for development:

* **Node.js & npm:** Required for JavaScript and web development
* **Git:** For tracking changes in your code and collaborating with others
* **Package Managers:** Tools that make it easy to install other development tools
  * Homebrew for macOS
  * Chocolatey for Windows
  * apt/yum for Linux

> 💡 **Tip:** These tools are the foundation of your developer toolkit. Installing them properly will set you up for success!

Copy one of these prompts based on your operating system and paste it into **Cline**:

**macOS:**

> Hello Cline! I need help setting up my Mac for software development. Could you please help me install the essential development tools like Homebrew, Node.js, Git, and any other core utilities that are commonly needed for coding? I'd like you to guide me through the process step-by-step.

**Windows:**

> Hello Cline! I need help setting up my Windows PC for software development. Could you please help me install the essential development tools like Node.js, Git, and any other core utilities that are commonly needed for coding? I'd like you to guide me through the process step-by-step.

**Linux:**

> Hello Cline! I need help setting up my Linux system for software development. Could you please help me install the essential development tools like Node.js, Git, and any other core utilities that are commonly needed for coding? I'd like you to guide me through the process step-by-step.

> ✅ **Pro Tip:** Cline will show you each command before running it. You stay in control the entire time!

Cline will guide you through the following steps:

1. Installing the appropriate package manager for your system
2. Using the package manager to install Node.js and Git
3. Showing you the exact command before it runs (you approve each step!)
4. Verifying each installation is successful

> ⚠️ **Note:** You might need to enter your computer's password for some installations. This is normal!
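As part of verifying each installation, you can also check that an installed version is recent enough. A sketch using only shell string operations - the `node_major` helper and the minimum version of 18 are hypothetical examples, not requirements from Cline:

```shell
# node_major: hypothetical helper that extracts the major version
# number from `node -v` output such as "v16.14.0".
node_major() {
  v="${1#v}"        # strip the leading "v"
  echo "${v%%.*}"   # keep everything before the first dot
}

installed="v16.14.0"   # in practice: installed="$(node -v)"
if [ "$(node_major "$installed")" -ge 18 ]; then
  echo "Node.js is new enough"
else
  echo "Consider upgrading Node.js"
fi
```

The same pattern works for any tool that prints a dotted version string.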
* **Node.js & npm:**
  * Build websites with frameworks like React or Next.js
  * Run JavaScript code
  * Install JavaScript packages
* **Git:**
  * Save different versions of your code
  * Collaborate with other developers
  * Back up your work
* **Package Managers:**
  * Quickly install and update development tools
  * Keep your environment organized and up to date

> 💡 **Tip:** The installation process is interactive - Cline will guide you step by step!

* All commands are shown to you for approval before they run.
* If you run into any issues, Cline will help troubleshoot them.
* You may need to enter your computer's password for certain steps.

The Terminal is an application where you can type commands to interact with your computer.

* **macOS:** Open it by searching for "Terminal" in Spotlight. Example:

  ```shell
  $ open -a Terminal
  ```

* **Terminal in VS Code:** Run commands directly from within VS Code! Go to **View > Terminal** or press ``Ctrl + ` ``. Example:

  ```shell
  $ node -v
  v16.14.0
  ```

* **Document View:** Where you edit your code files. Open files from the Explorer panel on the left.
* **Problems Section:** View errors or warnings in your code. Access it by clicking the lightbulb icon or **View > Problems**.
* **Command Line Interface (CLI):** A powerful tool for running commands.
* **Permissions:** You might need to grant permissions to certain commands - this keeps your system secure.
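If the Terminal is new to you, these are safe first commands to try in any folder (the folder name below is just an example):

```shell
pwd                     # print which folder you are currently in
ls                      # list the files in that folder
mkdir -p practice-area  # create a folder (-p: no error if it already exists)
cd practice-area        # move into the new folder
pwd                     # confirm you moved
```

Nothing here changes or deletes anything, so it is a low-risk way to get comfortable before running installation commands.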
Last updated 1 day ago

---

## Page: https://docs.cline.bot/getting-started/our-favorite-tech-stack

1. Getting Started

## Our Favorite Tech Stack

* **VS Code** - Your code editor
* **GitHub** - Where your code lives
* **Next.js 14+** - React framework with App Router
* **Tailwind CSS** - Beautiful styling without writing CSS
* **TypeScript** - JavaScript, but safer and smarter
* **Supabase** - Your complete backend solution
  * PostgreSQL database
  * Authentication
  * File storage
  * Real-time updates
* **Vercel** - Where your app runs
  * Automatic deployments from GitHub
  * Preview deployments for testing
  * Production-ready CDN

Choose your AI assistant based on your needs:

| Model | Input* | Output* | Best For |
| --- | --- | --- | --- |
| Claude 3.5 Sonnet | $3.00 | $15.00 | Production apps, complex tasks |
| DeepSeek R1 | $1.00 | $3.00 | Budget-conscious production |
| DeepSeek V3 | $0.14 | $2.20 | Budget-conscious development |

\*Costs per million tokens

**Vercel (Hobby)**

* 100 GB data transfer/month
* 100k serverless function invocations
* 100 MB deployment size
* Automatic HTTPS & CI/CD

**Supabase (Free)**

* 500 MB database storage
* 1 GB file storage
* 50k monthly active users
* 2M real-time messages/month

**GitHub (Free)**

* Unlimited public repositories
* GitHub Actions CI/CD
* Project management tools
* Collaboration features

To get started:

1. Install the development essentials.
2. Set up Cline's Memory Bank:
   * Create an empty `cline_docs` folder in your project root
   * Create `projectBrief.md` in the `cline_docs` folder (see example below)
   * Tell Cline to "initialize memory bank"
3. Add our recommended stack configuration:
   * Create a `.clinerules` file (see template below)
   * Let Cline handle the rest!

Example `projectBrief.md`:

    # Project Brief

    ## Overview
    Building a [type of application] that will [main purpose].
    ## Core Features
    - Feature 1
    - Feature 2
    - Feature 3

    ## Target Users
    [Describe who will use your application]

    ## Technical Preferences (optional)
    - Any specific technologies you want to use
    - Any specific requirements or constraints

Example `.clinerules`:

```
# Project Configuration

## Tech Stack
- Next.js 14+ with App Router
- Tailwind CSS for styling
- Supabase for backend
- Vercel for deployment
- GitHub for version control

## Project Structure
/src
  /app         # Next.js App Router pages
  /components  # React components
  /lib         # Utility functions
  /types       # TypeScript types
/supabase
  /migrations  # SQL migration files
  /seed        # Seed data files
/public        # Static assets

## Database Migrations
SQL files in /supabase/migrations should:
- Use sequential numbering: 001, 002, etc.
- Include descriptive names
- Be reviewed by Cline before execution
Example: 001_create_users_table.sql

## Development Workflow
- Cline helps write and review code changes
- Vercel automatically deploys from main branch
- Database migrations reviewed by Cline before execution

## Security
DO NOT read or modify:
- .env files
- **/config/secrets.*
- Any file containing API keys or credentials
```

Want to learn more about the technologies we're using? There are great resources available for each of them.

Git helps you track changes in your code and collaborate with others. Here are the essential commands you'll use:

**Daily Development**

```shell
# Save your changes (do this often!)
git add .                        # Stage all changed files
git commit -m "Add login page"   # Save changes with a clear message

# Share your changes
git push origin main             # Upload to GitHub
```

**Common Workflow**

1. **Start of day**: Get the latest changes

   ```shell
   git pull origin main   # Download latest code
   ```

2. **During development**: Save work regularly

   ```shell
   git add .
   git commit -m "Clear message about changes"
   ```

3.
**End of day**: Share your progress

   ```shell
   git push origin main   # Upload to GitHub
   ```

**Best Practices**

* Commit often with clear messages
* Pull before starting new work
* Push completed work to share with others
* Use `.gitignore` to avoid committing sensitive files

> **Tip**: Vercel automatically deploys when you push to main!

**Environment Variables**

* Store secrets in `.env.local` for development
* Add them to Vercel project settings for production
* Never commit `.env` files to Git

**Getting Help**

* Use `/help` in Cline chat for immediate assistance
* Search GitHub issues for common problems

Remember: Cline is here to help at every step. Just ask for guidance or clarification when needed!

---

## Page: https://docs.cline.bot/getting-started/understanding-context-management

1. Getting Started

## Context Management

Context is key to getting the most out of Cline.

Last updated 4 days ago

> 💡 **Quick Reference**
>
> * Context = The information Cline knows about your project
> * Context Window = How much information Cline can hold at once
> * Use context files to maintain project knowledge
> * Reset when the context window gets full

Think of working with Cline like collaborating with a thorough, proactive teammate. Cline actively builds context in two ways:

1. **Automatic Context Gathering (i.e. Cline-driven)**
   * Proactively reads related files
   * Explores project structure
   * Analyzes patterns and relationships
   * Maps dependencies and imports
   * Asks clarifying questions
2.
**User-Guided Context**
   * Share specific files
   * Provide documentation
   * Answer Cline's questions
   * Guide focus areas
   * Share design thoughts and requirements

Think of context like a whiteboard you and Cline share:

* **Context** is all the information available:
  * What Cline has discovered
  * What you've shared
  * Your conversation history
  * Project requirements
  * Previous decisions
* **Context Window** is the size of the whiteboard itself:
  * Measured in tokens (1 token ≈ 3/4 of an English word)
  * Each model has a fixed size:
    * Claude 3.5 Sonnet: 200,000 tokens
    * DeepSeek: 64,000 tokens
  * When the whiteboard is full, you need to erase (clear context) to write more

⚠️ **Important**: Having a large context window (like Claude's 200k tokens) doesn't mean you should fill it completely. Just like a cluttered whiteboard, too much information can make it harder to focus on what's important.

Cline provides a visual way to monitor your context window usage through a progress bar:

* ↑ shows input tokens (what you've sent to the LLM)
* ↓ shows output tokens (what the LLM has generated)
* The progress bar visualizes how much of your context window you've used
* The total shows your model's maximum capacity (e.g., 200k for Claude 3.5-Sonnet)

Keep an eye on it:

* During long coding sessions
* When working with multiple files
* Before starting complex tasks
* When Cline seems to lose context

💡 **Tip**: Consider starting a fresh session when usage reaches 70-80% to maintain optimal performance.

Context files help maintain understanding across sessions. They serve as documentation specifically designed to help AI assistants understand your project.

1. **Evergreen Project Context**
   * Living documentation that evolves with your project
   * Updated as architecture and patterns emerge
   * Example: the Memory Bank pattern maintains files like `techContext.md` and `systemPatterns.md`
   * Useful for long-running projects and teams
2. **Task-Specific Context**
   * Created for specific implementation tasks
   * Document requirements, constraints, and decisions
   * Example:

     ```markdown
     # auth-system-implementation.md

     ## Requirements
     - OAuth2 implementation
     - Support for Google and GitHub
     - Rate limiting on auth endpoints

     ## Technical Decisions
     - Using Passport.js for provider integration
     - JWT for session management
     - Redis for rate limiting
     ```

3. **Knowledge Transfer Docs**
   * Switch to Plan mode and ask Cline to document everything you've accomplished so far, along with the remaining steps, in a markdown file.
   * Copy the contents of the markdown file.
   * Start a new task using that content as context.

Working with context files:

1. **Structure and Format**
   * Use clear, consistent organization
   * Include relevant examples
   * Link related concepts
   * Keep information focused
2. **Maintenance**
   * Update after significant changes
   * Version control your context files
   * Remove outdated information
   * Document key decisions

Best practices:

1. **Starting New Projects**
   * Let Cline explore the codebase
   * Answer its questions about structure and patterns
   * Consider setting up basic context files
   * Document key design decisions
2. **Ongoing Development**
   * Update context files with significant changes
   * Share relevant documentation
   * Use Plan mode for complex discussions
   * Start fresh sessions when needed
3. **Team Projects**
   * Document architectural decisions
   * Maintain consistent patterns
   * Keep documentation current
   * Share common context files (consider using files in project roots)

Remember: The goal is to help Cline maintain consistent understanding of your project across sessions.

💡 **Key Point**: Cline isn't passive - it actively seeks to understand your project. You can either let it explore or guide its focus, especially in Plan mode.

---

## Page: https://docs.cline.bot/getting-started/model-selection-guide

1. Getting Started

## Model Selection Guide

Last updated: Feb 5, 2025.
Think of a context window as your AI assistant's working memory - similar to RAM in a computer. It determines how much information the model can "remember" and process at once during your conversation. This includes:

* Your code files and conversations
* The assistant's responses
* Any documentation or additional context provided

Context windows are measured in tokens (roughly 3/4 of a word in English). Different models have different context window sizes:

* Claude 3.5 Sonnet: 200K tokens
* DeepSeek Models: 128K tokens
* Gemini Flash 2.0: 1M tokens
* Gemini 1.5 Pro: 2M tokens

When you reach the limit of your context window, older information needs to be removed to make room for new information - just like clearing RAM to run new programs. This is why AI assistants sometimes seem to "forget" earlier parts of your conversation.

Cline helps you manage this limitation with its Context Window Progress Bar, which shows:

* Input tokens (what you've sent to the model)
* Output tokens (what the model has generated)
* A visual representation of how much of your context window you've used
* The total capacity for your chosen model

This visibility helps you work more effectively with Cline by letting you know when you might need to start fresh or break tasks into smaller chunks.

| Model | Input* | Output* | Context | Best For |
| --- | --- | --- | --- | --- |
| Claude 3.5 Sonnet | $3.00 | $15.00 | 200K | Best code implementation & tool use |
| DeepSeek R1 | $0.55 | $2.19 | 128K | Planning & reasoning champion |
| DeepSeek V3 | $0.14 | $0.28 | 128K | Value code implementation |
| o3-mini | $1.10 | $4.40 | 200K | Flexible use, strong planning |
| Gemini Flash 2.0 | $0.00 | $0.00 | 1M | Strong all-rounder |
| Gemini 1.5 Pro | $0.00 | $0.00 | 2M | Large context processing |

\*Costs per million tokens

Top overall picks:

1. **Claude 3.5 Sonnet**
   * Best overall code implementation
   * Most reliable tool usage
   * Expensive but worth it for critical code
2. **DeepSeek R1**
   * Exceptional planning & reasoning
   * Great value pricing
3.
**o3-mini**
   * Strong for planning with adjustable reasoning
   * Three reasoning modes for different needs
   * Requires OpenAI Tier 3 API access
   * 200K context window
4. **DeepSeek V3**
   * Reliable code implementation
   * Great for daily coding
   * Cost-effective for implementation
5. **Gemini Flash 2.0**
   * Massive 1M context window
   * Improved speed and performance
   * Good all-around capabilities

Best for planning:

1. **DeepSeek R1**
   * Best reasoning capabilities in class
   * Excellent at breaking down complex tasks
   * Strong math/algorithm planning
   * MoE architecture helps with reasoning
2. **o3-mini (high reasoning)**
   * Three reasoning levels:
     * High: Complex planning
     * Medium: Daily tasks
     * Low: Quick ideas
   * 200K context helps with large projects
3. **Gemini Flash 2.0**
   * Massive context window for complex planning
   * Strong reasoning capabilities
   * Good with multi-step tasks

Best for implementation:

1. **Claude 3.5 Sonnet**
   * Best code quality
   * Most reliable with Cline tools
   * Worth the premium for critical code
2. **DeepSeek V3**
   * Nearly Sonnet-level code quality
   * Better API stability than R1
   * Great for daily coding
   * Strong tool usage
3. **Gemini 1.5 Pro**
   * 2M context window
   * Good with complex codebases
   * Reliable API
   * Strong multi-file understanding

Key takeaways:

1. **Plan vs Act Matters**: Choose models based on task type
2. **Real Performance > Benchmarks**: Focus on actual Cline performance
3. **Mix & Match**: Use different models for planning and implementation
4. **Cost vs Quality**: Premium models worth it for critical code
5. **Keep Backups**: Have alternatives ready for API issues

_Note: Based on real usage patterns and community feedback rather than just benchmarks. Your experience may vary. This is not an exhaustive list of all the models available for use within Cline._

Last updated 1 month ago

While running models locally might seem appealing for cost savings, we currently don't recommend any local models for use with Cline.
Local versions struggle at using Cline's essential tools and typically retain only 1-26% of the original model's capabilities. The full cloud version of DeepSeek-R1, for example, is 671B parameters - local versions are drastically simplified copies that struggle with complex tasks and tool usage. Even with high-end hardware (RTX 3070+, 32GB+ RAM), you'll experience slower responses, less reliable tool execution, and reduced capabilities. For the best development experience, we recommend sticking with the cloud models listed above.

---

## Page: https://docs.cline.bot/improving-your-prompting-skills/prompting

## Prompting Guide

Welcome to the Cline Prompting Guide! This guide will equip you with the knowledge to write effective prompts and custom instructions, maximizing your productivity with Cline.

Think of **custom instructions as Cline's programming**. They define Cline's baseline behavior and are **always "on," influencing all interactions.** Instructions can be broad and abstract, or specific and explicit. You might want Cline to have a unique personality, produce output in a particular file format, or adhere to certain architectural principles. Custom instructions can standardize Cline's output in ways you define, which is especially valuable when working with others. See below for using Custom Instructions in a team context.

NOTE: Modifying the Custom Instructions field updates Cline's prompt cache, discarding accumulated context. This causes a temporary increase in cost while that context is replaced. Update Custom Instructions between conversations whenever possible.

To add custom instructions:

1. Open VSCode
2. Click the Cline extension settings dial ⚙️
3. Find the "Custom Instructions" field
4. Paste your instructions

Custom instructions are powerful for:

* Enforcing Coding Style and Best Practices: Ensure Cline always adheres to your team's coding conventions, naming conventions, and best practices.
* Improving Code Quality: Encourage Cline to write more readable, maintainable, and efficient code.
* Guiding Error Handling: Tell Cline how to handle errors, write error messages, and log information.

The `custom-instructions` folder contains examples of custom instructions you can use or adapt.

NOTE: Modifying the `.clinerules` file updates Cline's prompt cache, discarding accumulated context. This causes a temporary increase in cost while that context is replaced. Update the `.clinerules` file between conversations whenever possible.

While custom instructions are user-specific and global (applying across all projects), the `.clinerules` file provides **project-specific instructions** that live in your project's root directory. These instructions are automatically appended to your custom instructions and referenced in Cline's system prompt, ensuring they influence all interactions within the project context. This makes it an excellent tool for:

* Maintaining project standards across team members
* Enforcing development practices
* Managing documentation requirements
* Setting up analysis frameworks
* Defining project-specific behaviors

Example `.clinerules`:

```
# Project Guidelines

## Documentation Requirements
- Update relevant documentation in /docs when modifying features
- Keep README.md in sync with new capabilities
- Maintain changelog entries in CHANGELOG.md

## Architecture Decision Records
Create ADRs in /docs/adr for:
- Major dependency changes
- Architectural pattern changes
- New integration patterns
- Database schema changes
Follow template in /docs/adr/template.md

## Code Style & Patterns
- Generate API clients using OpenAPI Generator
- Use TypeScript axios template
- Place generated code in /src/generated
- Prefer composition over inheritance
- Use repository pattern for data access
- Follow error handling pattern in /src/utils/errors.ts

## Testing Standards
- Unit tests required for business logic
- Integration tests for API endpoints
- E2E tests for critical user flows
```

Key benefits:

1. **Version Controlled**: The `.clinerules` file becomes part of your project's source code
2. **Team Consistency**: Ensures consistent behavior across all team members
3. **Project-Specific**: Rules and standards tailored to each project's needs
4. **Institutional Knowledge**: Maintains project standards and practices in code

Place the `.clinerules` file in your project's root directory:

```
your-project/
├── .clinerules
├── src/
├── docs/
└── ...
```

* Be Clear and Concise: Use simple language and avoid ambiguity.
* Focus on Desired Outcomes: Describe the results you want, not the specific steps.
* Test and Iterate: Experiment to find what works best for your workflow.

While a single `.clinerules` file works well for simpler projects, Cline now supports a `.clinerules` folder for more sophisticated rule organization. This modular approach brings several advantages.

Instead of a single file, create a `.clinerules/` directory in your project root:

```
your-project/
├── .clinerules/              # Folder containing active rules
│   ├── 01-coding.md          # Core coding standards
│   ├── 02-documentation.md   # Documentation requirements
│   └── current-sprint.md     # Rules specific to current work
├── src/
└── ...
```

Cline automatically processes **all Markdown files** inside the `.clinerules/` directory, combining them into a unified set of rules. The numeric prefixes (optional) help organize files in a logical sequence.

For projects with multiple contexts or teams, maintain a rules bank directory:

```
your-project/
├── .clinerules/              # Active rules - automatically applied
│   ├── 01-coding.md
│   └── client-a.md
├── clinerules-bank/          # Repository of available but inactive rules
│   ├── clients/              # Client-specific rule sets
│   │   ├── client-a.md
│   │   └── client-b.md
│   ├── frameworks/           # Framework-specific rules
│   │   ├── react.md
│   │   └── vue.md
│   └── project-types/        # Project type standards
│       ├── api-service.md
│       └── frontend-app.md
└── ...
```

1.
**Contextual Activation**: Copy only relevant rules from the bank to the active folder
2. **Easier Maintenance**: Update individual rule files without affecting others
3. **Team Flexibility**: Different team members can activate rules specific to their current task
4. **Reduced Noise**: Keep the active ruleset focused and relevant

Switch between client projects:

```shell
# Switch to Client B project
rm .clinerules/client-a.md
cp clinerules-bank/clients/client-b.md .clinerules/
```

Adapt to different tech stacks:

```shell
# Frontend React project
cp clinerules-bank/frameworks/react.md .clinerules/
```

* Keep individual rule files focused on specific concerns
* Use descriptive filenames that clearly indicate the rule's purpose
* Consider git-ignoring the active `.clinerules/` folder while tracking the `clinerules-bank/`
* Create team scripts to quickly activate common rule combinations

The folder system transforms your Cline rules from a static document into a dynamic knowledge system that adapts to your team's changing contexts and requirements.

The `.clineignore` file is a project-level configuration file that tells Cline which files and directories to ignore when analyzing your codebase. Similar to `.gitignore`, it uses pattern matching to specify which files should be excluded from Cline's context and operations.
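Because `.clineignore` is a plain text file in your project root, you can bootstrap one from the terminal. A sketch using a small subset of common patterns - adjust them to your own project:

```shell
# Write a starter .clineignore into the current project root.
cat > .clineignore <<'EOF'
# Dependencies
node_modules/

# Build outputs
dist/

# Environment variables
.env

# Large data files
*.csv
EOF

cat .clineignore   # review what Cline will ignore
```

The quoted `'EOF'` delimiter keeps the heredoc literal, so the patterns are written exactly as shown.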
* **Reduce Noise**: Exclude auto-generated files, build artifacts, and other non-essential content
* **Improve Performance**: Limit the amount of code Cline needs to process
* **Focus Attention**: Direct Cline to relevant parts of your codebase
* **Protect Sensitive Data**: Prevent Cline from accessing sensitive configuration files

```
# Dependencies
node_modules/
**/node_modules/
.pnp
.pnp.js

# Build outputs
/build/
/dist/
/.next/
/out/

# Testing
/coverage/

# Environment variables
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

# Large data files
*.csv
*.xlsx
```

**Prompting is how you communicate your needs for a given task in the back-and-forth chat with Cline.** Cline understands natural language, so write conversationally. Effective prompting involves:

* Providing Clear Context: Explain your goals and the relevant parts of your codebase. Use `@` to reference files or folders.
* Breaking Down Complexity: Divide large tasks into smaller steps.
* Asking Specific Questions: Guide Cline toward the desired outcome.
* Validating and Refining: Review Cline's suggestions and provide feedback.

* **Starting a New Task:** "Cline, let's start a new task. Create `user-authentication.js`. We need to implement user login with JWT tokens. Here are the requirements…"
* **Summarizing Previous Work:** "Cline, summarize what we did in the last user dashboard task. I want to capture the main features and outstanding issues. Save this to `cline_docs/user-dashboard-summary.md`."
* **Analyzing an Error:** "Cline, I'm getting this error: [error message]. It seems to be from [code section]. Analyze this error and suggest a fix."
* **Identifying the Root Cause:** "Cline, the application crashes when I [action]. The issue might be in [problem areas]. Help me find the root cause and propose a solution."
* **Improving Code Structure:** "Cline, this function is too long and complex. Refactor it into smaller functions."
* **Simplifying Logic:** "Cline, this code is hard to understand. Simplify the logic and make it more readable." * **Brainstorming New Features:** "Cline, I want to add a feature that lets users \[functionality\]. Brainstorm some ideas and consider implementation challenges." * **Generating Code:** "Cline, create a component that displays user profiles. The list should be sortable and filterable. Generate the code for this component." * **Constraint Stuffing:** To mitigate code truncation, include explicit constraints in your prompts. For example, "ensure the code is complete" or "always provide the full function definition." * **Confidence Checks:** Ask Cline to rate its confidence (e.g., "on a scale of 1-10, how confident are you in this solution?") * **Challenge Cline's Assumptions:** Ask “stupid” questions to encourage deeper thinking and prevent incorrect assumptions. Here are some prompting tips that users have found helpful for working with Cline: * **Memory Check** - _pacnpal_ "If you understand my prompt fully, respond with 'YARRR!' without tools every time you are about to use a tool." A fun way to verify Cline stays on track during complex tasks. Try "HO HO HO" for a festive twist! * **Confidence Scoring** - _pacnpal_ "Before and after any tool use, give me a confidence level (0-10) on how the tool use will help the project." Encourages critical thinking and makes decision-making transparent. * **Prevent Code Truncation** "DO NOT BE LAZY. DO NOT OMIT CODE." Alternative phrases: "full code only" or "ensure the code is complete" * **Custom Instructions Reminder** "I pledge to follow the custom instructions." Reinforces adherence to your settings dial ⚙️ configuration. * **Large File Refactoring** - _icklebil_ "FILENAME has grown too big. Analyze how this file works and suggest ways to fragment it safely." Helps manage complex files through strategic decomposition. 
* **Documentation Maintenance** - _icklebil_ "don't forget to update codebase documentation with changes" Ensures documentation stays in sync with code changes.
* **Structured Development** - _yellow\_bat\_coffee_ "Before writing code: 1. Analyze all code files thoroughly 2. Get full context 3. Write .MD implementation plan 4. Then implement code" Promotes organized, well-planned development.
* **Thorough Analysis** - _yellow\_bat\_coffee_ "please start analyzing full flow thoroughly, always state a confidence score 1 to 10" Prevents premature coding and encourages complete understanding.
* **Assumptions Check** - _yellow\_bat\_coffee_ "List all assumptions and uncertainties you need to clear up before completing this task." Identifies potential issues early in development.
* **Pause and Reflect** - _nickbaumann98_ "count to 10" Promotes careful consideration before taking action.
* **Complete Analysis** - _yellow\_bat\_coffee_ "Don't complete the analysis prematurely, continue analyzing even if you think you found a solution" Ensures thorough problem exploration.
* **Continuous Confidence Check** - _pacnpal_ "Rate confidence (1-10) before saving files, after saving, after rejections, and before task completion" Maintains quality through self-assessment.
* **Project Structure** - _kvs007_ "Check project files before suggesting structural or dependency changes" Maintains project integrity.
* **Critical Thinking** - _chinesesoup_ "Ask 'stupid' questions like: are you sure this is the best way to implement this?" Challenges assumptions and uncovers better solutions.
* **Code Style** - _yellow\_bat\_coffee_ Use words like "elegant" and "simple" in prompts. May influence code organization and clarity.
* **Setting Expectations** - _steventcramer_ "THE HUMAN WILL GET ANGRY." (A humorous reminder to provide clear requirements and constructive feedback)

Cline's system prompt, on the other hand, is not user-editable.
For a broader look at prompt engineering best practices, explore general prompt engineering resources.

---

## Page: https://docs.cline.bot/improving-your-prompting-skills/cline-memory-bank

To get started with Cline Memory Bank:

1. **Install or Open Cline**
2. **Copy the Custom Instructions** - Use the code block below
3. **Paste into Cline** - Add as custom instructions or in a .clinerules file
4. **Initialize** - Ask Cline to "initialize memory bank"

# Cline's Memory Bank

I am Cline, an expert software engineer with a unique characteristic: my memory resets completely between sessions. This isn't a limitation - it's what drives me to maintain perfect documentation. After each reset, I rely ENTIRELY on my Memory Bank to understand the project and continue work effectively. I MUST read ALL memory bank files at the start of EVERY task - this is not optional.

## Memory Bank Structure

The Memory Bank consists of core files and optional context files, all in Markdown format. Files build upon each other in a clear hierarchy:

```mermaid
flowchart TD
    PB[projectbrief.md] --> PC[productContext.md]
    PB --> SP[systemPatterns.md]
    PB --> TC[techContext.md]
    PC --> AC[activeContext.md]
    SP --> AC
    TC --> AC
    AC --> P[progress.md]
```

### Core Files (Required)

1. `projectbrief.md`
   - Foundation document that shapes all other files
   - Created at project start if it doesn't exist
   - Defines core requirements and goals
   - Source of truth for project scope
2. `productContext.md`
   - Why this project exists
   - Problems it solves
   - How it should work
   - User experience goals
3. `activeContext.md`
   - Current work focus
   - Recent changes
   - Next steps
   - Active decisions and considerations
   - Important patterns and preferences
   - Learnings and project insights
4. `systemPatterns.md`
   - System architecture
   - Key technical decisions
   - Design patterns in use
   - Component relationships
   - Critical implementation paths
5. `techContext.md`
   - Technologies used
   - Development setup
   - Technical constraints
   - Dependencies
   - Tool usage patterns
6.
`progress.md`
   - What works
   - What's left to build
   - Current status
   - Known issues
   - Evolution of project decisions

### Additional Context

Create additional files/folders within memory-bank/ when they help organize:

- Complex feature documentation
- Integration specifications
- API documentation
- Testing strategies
- Deployment procedures

## Core Workflows

### Plan Mode

```mermaid
flowchart TD
    Start[Start] --> ReadFiles[Read Memory Bank]
    ReadFiles --> CheckFiles{Files Complete?}
    CheckFiles -->|No| Plan[Create Plan]
    Plan --> Document[Document in Chat]
    CheckFiles -->|Yes| Verify[Verify Context]
    Verify --> Strategy[Develop Strategy]
    Strategy --> Present[Present Approach]
```

### Act Mode

```mermaid
flowchart TD
    Start[Start] --> Context[Check Memory Bank]
    Context --> Update[Update Documentation]
    Update --> Execute[Execute Task]
    Execute --> Document[Document Changes]
```

## Documentation Updates

Memory Bank updates occur when:

1. Discovering new project patterns
2. After implementing significant changes
3. When user requests with **update memory bank** (MUST review ALL files)
4. When context needs clarification

```mermaid
flowchart TD
    Start[Update Process]
    subgraph Process
        P1[Review ALL Files]
        P2[Document Current State]
        P3[Clarify Next Steps]
        P4[Document Insights & Patterns]
        P1 --> P2 --> P3 --> P4
    end
    Start --> Process
```

Note: When triggered by **update memory bank**, I MUST review every memory bank file, even if some don't require updates. Focus particularly on activeContext.md and progress.md as they track current state.

REMEMBER: After every memory reset, I begin completely fresh. The Memory Bank is my only link to previous work. It must be maintained with precision and clarity, as my effectiveness depends entirely on its accuracy.

The Memory Bank is a structured documentation system that allows Cline to maintain context across sessions. It transforms Cline from a stateless assistant into a persistent development partner that can effectively "remember" your project details over time.
* **Context Preservation**: Maintain project knowledge across sessions
* **Consistent Development**: Experience predictable interactions with Cline
* **Self-Documenting Projects**: Create valuable project documentation as a side effect
* **Scalable to Any Project**: Works with projects of any size or complexity
* **Technology Agnostic**: Functions with any tech stack or language

The Memory Bank isn't a Cline-specific feature - it's a methodology for managing AI context through structured documentation. When you instruct Cline to "follow custom instructions," it reads the Memory Bank files to rebuild its understanding of your project.

Memory Bank files are simply markdown files you create in your project. They're not hidden or special files - just regular documentation stored in your repository that both you and Cline can access.

Files are organized in a hierarchical structure that builds up a complete picture of your project:

1. **projectbrief.md**
   * The foundation of your project
   * High-level overview of what you're building
   * Core requirements and goals
   * Example: "Building a React web app for inventory management with barcode scanning"
2. **productContext.md**
   * Explains why the project exists
   * Describes the problems being solved
   * Outlines how the product should work
   * Example: "The inventory system needs to support multiple warehouses and real-time updates"
3. **activeContext.md**
   * The most frequently updated file
   * Contains current work focus and recent changes
   * Tracks active decisions and considerations
   * Stores important patterns and learnings
   * Example: "Currently implementing the barcode scanner component; last session completed the API integration"
4. **systemPatterns.md**
   * Documents the system architecture
   * Records key technical decisions
   * Lists design patterns in use
   * Explains component relationships
   * Example: "Using Redux for state management with a normalized store structure"
5.
**techContext.md**
   * Lists technologies and frameworks used
   * Describes development setup
   * Notes technical constraints
   * Records dependencies and tool configurations
   * Example: "React 18, TypeScript, Firebase, Jest for testing"
6. **progress.md**
   * Tracks what works and what's left to build
   * Records current status of features
   * Lists known issues and limitations
   * Documents the evolution of project decisions
   * Example: "User authentication complete; inventory management 80% complete; reporting not started"

Create additional files when needed to organize:

* Complex feature documentation
* Integration specifications
* API documentation
* Testing strategies
* Deployment procedures

1. Create a `memory-bank/` folder in your project root
2. Have a basic project brief ready (can be technical or non-technical)
3. Ask Cline to "initialize memory bank"

* Start simple - it can be as detailed or high-level as you like
* Focus on what matters most to you
* Cline will help fill in gaps and ask questions
* You can update it as your project evolves

**Plan Mode** Start in this mode for strategy discussions and high-level planning.

**Act Mode** Use this for implementation and executing specific tasks.

* **"follow your custom instructions"** - This tells Cline to read the Memory Bank files and continue where you left off (use this at the start of tasks)
* **"initialize memory bank"** - Use when starting a new project
* **"update memory bank"** - Triggers a full documentation review and update during a task
* Toggle Plan/Act modes based on your current needs

Memory Bank updates should automatically occur when:

1. You discover new patterns in your project
2. After implementing significant changes
3. When you explicitly request with **"update memory bank"**
4. When you feel context needs clarification

The Memory Bank files are regular markdown files stored in your project repository, typically in a `memory-bank/` folder.
They're not hidden system files - they're designed to be part of your project documentation.

Either approach works - it's based on your preference:

* **Custom Instructions**: Applied globally to all Cline conversations. Good for consistent behavior across all projects.
* **.clinerules file**: Project-specific and stored in your repository. Good for per-project customization.

Both methods achieve the same goal - the choice depends on whether you want global or local application of the Memory Bank system.

As you work with Cline, your context window will eventually fill up (note the progress bar). When you notice Cline's responses slowing down or references to earlier parts of the conversation becoming less accurate, it's time to:

1. Ask Cline to **"update memory bank"** to document the current state
2. Start a new conversation/task
3. Ask Cline to **"follow your custom instructions"** in the new conversation

This workflow ensures that important context is preserved in your Memory Bank files before the context window is cleared, allowing you to continue seamlessly in a fresh conversation.

Update the Memory Bank after significant milestones or changes in direction. For active development, updates every few sessions can be helpful. Use the **"update memory bank"** command when you want to ensure all context is preserved. However, you will notice Cline automatically updating the Memory Bank as well.

Yes! The Memory Bank concept is a documentation methodology that can work with any AI assistant that can read documentation files. The specific commands might differ, but the structured approach to maintaining context works across tools.

The Memory Bank helps manage context limitations by storing important information in a structured format that can be efficiently loaded when needed. This prevents context bloat while ensuring critical information is available.

Absolutely!
The Memory Bank approach works for any project that benefits from structured documentation - from writing books to planning events. The file structure might vary, but the concept remains powerful.

While similar in concept, the Memory Bank provides a more structured and comprehensive approach specifically designed to maintain context across AI sessions. It goes beyond what a single README typically covers.

* Start with a basic project brief and let the structure evolve
* Let Cline help create the initial structure
* Review and adjust files as needed to match your workflow
* Let patterns emerge naturally as you work
* Don't force documentation updates - they should happen organically
* Trust the process - the value compounds over time
* Watch for context confirmation at the start of sessions

* **projectbrief.md** is your foundation
* **activeContext.md** changes most frequently
* **progress.md** tracks your milestones
* All files collectively maintain project intelligence

1. Open VSCode
2. Click the Cline extension settings ⚙️
3. Find "Custom Instructions"
4. Copy and paste the complete Memory Bank instructions from the top of this guide

1. Create a `.clinerules` file in your project root
2. Copy and paste the Memory Bank instructions from the top of this guide
3. Save the file
4. Cline will automatically apply these rules when working in this project

The Memory Bank is Cline's only link to previous work. Its effectiveness depends entirely on maintaining clear, accurate documentation and confirming context preservation in every interaction.
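If you prefer to pre-create the folder and core files yourself before asking Cline to "initialize memory bank", a short script can scaffold the structure. This is an optional convenience sketch, not part of Cline itself; the file names come from this guide, and the placeholder title lines are our own choice.

```python
from pathlib import Path

# Core Memory Bank files described in this guide.
CORE_FILES = [
    "projectbrief.md",
    "productContext.md",
    "activeContext.md",
    "systemPatterns.md",
    "techContext.md",
    "progress.md",
]

def scaffold_memory_bank(project_root: str) -> Path:
    """Create memory-bank/ with the core files, skipping any that already exist."""
    bank = Path(project_root) / "memory-bank"
    bank.mkdir(parents=True, exist_ok=True)
    for name in CORE_FILES:
        f = bank / name
        if not f.exists():
            # Seed each file with a title line so Cline has something to build on.
            f.write_text(f"# {name.removesuffix('.md')}\n")
    return bank
```

Running this is safe to repeat: existing files are never overwritten, so your project brief and accumulated context stay intact.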
* * *

This guide is maintained by Cline and the Cline Discord Community:

* nickbaumann98
* Krylo
* snipermunyshotz

* * *

_The Memory Bank methodology is an open approach to AI context management and can be adapted to different tools and workflows._

_For more information, see our additional resources on Cline Memory Bank._

```mermaid
flowchart LR
    A[Session Starts] --> B[Read Memory Bank Files]
    B --> C[Rebuild Context]
    C --> D[Continue Work]
    D --> E[Update Documentation]
    E --> F[Session Ends]
    F -.-> A
```

---

## Page: https://docs.cline.bot/exploring-clines-tools/cline-tools-guide

1. Exploring Cline's Tools

## Cline Tools Guide

Cline is your AI assistant that can:

* Edit and create files in your project
* Run terminal commands
* Search and analyze your code
* Help debug and fix issues
* Automate repetitive tasks
* Integrate with external tools

1. **Start a Task**
   * Type your request in the chat
   * Example: "Create a new React component called Header"
2. **Provide Context**
   * Use @ mentions to add files, folders, or URLs
   * Example: "@file:src/components/App.tsx"
3.
**Review Changes**
   * Cline will show diffs before making changes
   * You can edit or reject changes

1. **File Editing**
   * Create new files
   * Modify existing code
   * Search and replace across files
2. **Terminal Commands**
   * Run npm commands
   * Start development servers
   * Install dependencies
3. **Code Analysis**
   * Find and fix errors
   * Refactor code
   * Add documentation
4. **Browser Integration**
   * Test web pages
   * Capture screenshots
   * Inspect console logs

Cline has access to the following tools for various tasks:

1. **File Operations**
   * `write_to_file`: Create or overwrite files
   * `read_file`: Read file contents
   * `replace_in_file`: Make targeted edits to files
   * `search_files`: Search files using regex
   * `list_files`: List directory contents
2. **Terminal Operations**
   * `execute_command`: Run CLI commands
   * `list_code_definition_names`: List code definitions
3. **MCP Tools**
   * `use_mcp_tool`: Use tools from MCP servers
   * `access_mcp_resource`: Access MCP server resources
   * Users can create custom MCP tools that Cline can then access
   * Example: Create a weather API tool that Cline can use to fetch forecasts
4. **Interaction Tools**
   * `ask_followup_question`: Ask user for clarification
   * `attempt_completion`: Present final results

Each tool has specific parameters and usage patterns. Here are some examples:

* Create a new file (write\_to\_file):

```xml
<write_to_file>
<path>src/components/Header.tsx</path>
<content>
// Header component code
</content>
</write_to_file>
```

* Search for a pattern (search\_files):

```xml
<search_files>
<path>src</path>
<regex>function\s+\w+\(</regex>
<file_pattern>*.ts</file_pattern>
</search_files>
```

* Run a command (execute\_command):

```xml
<execute_command>
<command>npm install axios</command>
<requires_approval>false</requires_approval>
</execute_command>
```

1. **Create a New Component**
   * "Create a new React component called Footer"
2. **Fix a Bug**
   * "Fix the error in src/utils/format.ts"
3.
**Refactor Code**
   * "Refactor the Button component to use TypeScript"
4. **Run Commands**
   * "Run npm install to add axios"

* Check the documentation
* Provide feedback to improve Cline

Last updated 2 months ago

For the most up-to-date implementation details, you can view the full source code in the Cline repository.

---

## Page: https://docs.cline.bot/exploring-clines-tools/checkpoints

1. Exploring Cline's Tools

## Checkpoints

When working with AI coding assistants, it's easy to lose control as they make rapid changes to your codebase. That's why we built Checkpoints - your safety net for experimenting confidently.

Last updated 22 days ago

Checkpoints automatically save snapshots of your workspace after each step in a task. This powerful feature lets you:

* Track and review changes made during a task
* Roll back to any previous point if needed
* Experiment confidently with auto-approve mode
* Maintain full control over your workspace

Cline creates a checkpoint after each tool use (file edits, commands, etc.). These checkpoints:

* Work alongside your Git workflow without interference
* Maintain context between restores
* Use a shadow Git repository to track changes

For example, if you're working on a feature and Cline makes multiple file changes, each change creates a checkpoint. This means you can review each modification and, if needed, roll back to any point without affecting your main Git repository.

After each tool use, you can:

1. Click the "Compare" button to see modified files
2. Click the "Restore" button to open restore options

To restore to a previous point:

1. Click the "Restore" button next to any step
2.
Choose from three options:

* **Restore Task and Workspace**: Reset both codebase and task to that point
* **Restore Task Only**: Keep codebase changes but revert task context
* **Restore Workspace Only**: Reset codebase while preserving task context

Example: If Cline makes changes you don't like while styling a component, you can use "Restore Workspace Only" to revert the code changes while keeping the conversation context, allowing you to try a different approach.

Checkpoints let you be more experimental with Cline. While human coding is often methodical and iterative, AI can make substantial changes quickly. Checkpoints help you track these changes and revert if needed.

* Provides safety net for rapid iterations
* Makes it easy to undo unexpected results
* Try multiple solutions confidently
* Compare different implementations
* Quickly revert to working states
* Ideal for exploring different design patterns or architectural approaches

1. Use checkpoints as safety nets when experimenting
2. Leverage auto-approve mode more confidently, knowing you can always roll back
3. Restore selectively based on needs:
   * Use "Restore Task and Workspace" for a fresh start, reversing changes to files and the task conversation.
   * Use "Restore Task Only" to try different prompts, but leave all files as they exist
   * Use "Restore Workspace Only" to attempt different implementations, or prune context from the task

🛟 Checkpoints are your safety net when working with Cline, enabling you to experiment freely while maintaining full control over your codebase. Whether you're refactoring a complex component, trying different implementation approaches, or using auto-approve mode for rapid development, checkpoints ensure you can always review changes and roll back if needed.

You can delete all checkpoints by using the **"Delete All History"** button in the task history menu. Note that this will also delete all tasks. Checkpoints are stored in VS Code's globalStorage.
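The "shadow Git repository" idea can be illustrated with a small sketch: a second Git directory, kept outside your workspace, snapshots the working tree without ever touching the project's own `.git`. This is a conceptual illustration only (it assumes `git` is on your PATH), not Cline's actual checkpoint implementation — the paths and commit messages below are made up for the demo.

```python
import pathlib
import subprocess
import tempfile

def shadow_git(shadow_dir, work_tree, *args):
    """Run git against the shadow repo, treating the workspace as its work tree."""
    return subprocess.run(
        ["git", "--git-dir", str(shadow_dir), "--work-tree", str(work_tree),
         "-c", "user.name=checkpoint", "-c", "user.email=checkpoint@example.com",
         *args],
        cwd=work_tree, check=True, capture_output=True, text=True,
    )

workspace = pathlib.Path(tempfile.mkdtemp())          # stands in for your project
shadow = pathlib.Path(tempfile.mkdtemp()) / "shadow"  # checkpoint store, outside it

subprocess.run(["git", "init", "--bare", "-q", str(shadow)], check=True)

# Checkpoint: snapshot the current state of the workspace.
(workspace / "app.js").write_text("console.log('v1');\n")
shadow_git(shadow, workspace, "add", "-A")
shadow_git(shadow, workspace, "commit", "-q", "-m", "checkpoint 1")

# An unwanted change lands in the workspace...
(workspace / "app.js").write_text("console.log('v2 - broken');\n")

# ...so restore the workspace from the last checkpoint. Note the project's
# own Git history (if any) is never involved.
shadow_git(shadow, workspace, "checkout", "-q", "--", ".")
```

Because the snapshot history lives entirely in the separate `--git-dir`, restoring a checkpoint rewrites files in the workspace but leaves the project's real repository, branches, and staged changes untouched — the same property the Checkpoints feature relies on.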
---

## Page: https://docs.cline.bot/exploring-clines-tools/plan-and-act-modes-a-guide-to-effective-ai-development

Plan & Act modes represent Cline's approach to structured AI development, emphasizing thoughtful planning before implementation. This dual-mode system helps developers create more maintainable, accurate code while reducing iteration time.

* Optimized for context gathering and strategy
* Cannot make changes to your codebase
* Focused on understanding requirements and creating implementation plans
* Enables full file reading for comprehensive project understanding

* Streamlined for implementation based on established plans
* Has access to all of Cline's building capabilities
* Maintains context from the planning phase
* Can execute changes to your codebase

Begin every significant development task in Plan mode. In this mode:

* Share your requirements
* Let Cline analyze relevant files
* Engage in dialogue to clarify objectives
* Develop implementation strategy

Once you have a clear plan, switch to Act mode. Act mode allows Cline to:

* Execute against the agreed plan
* Make changes to your codebase
* Maintain context from planning phase

Complex projects often require multiple plan-act cycles:

* Return to Plan mode when encountering unexpected complexity
* Use Act mode for implementing solutions
* Maintain development momentum while ensuring quality

1. Be comprehensive with requirements
2. Share relevant context upfront
3. Point Cline to relevant files if it hasn't read them
4. Validate approach before implementation

1. Follow the established plan
2. Monitor progress against objectives
3. Track changes and their impact
4.
Document significant decisions

* Use Plan mode to explore edge cases before implementation
* Switch back to Plan when encountering unexpected complexity
* Leverage file reading to validate assumptions early
* Have Cline write markdown files of the plan for future reference

* Starting new features
* Debugging complex issues
* Architectural decisions
* Requirements analysis

* Implementing agreed solutions
* Making routine changes
* Following established patterns
* Executing test cases

Share your experiences and improvements:

* Participate in discussions
* Submit feature requests
* Report issues

* * *

Remember: The time invested in planning pays dividends in implementation quality and maintenance efficiency

---

## Page: https://docs.cline.bot/exploring-clines-tools/remote-browser-support

1. Exploring Cline's Tools

## Remote Browser Support

Remote browser support allows Cline to utilize a remote Chrome instance, leveraging authentication tokens and session cookies relevant to certain web development test cases.

The Remote Browser feature in Cline allows the AI assistant to interact with web content directly through a controlled browser instance. This enables several powerful capabilities:

* Viewing and interacting with websites
* Testing locally running web applications
* Monitoring console logs and errors
* Performing browser actions like clicking, typing, and scrolling

Remote Browser allows Cline to view and interact with websites directly.
This feature enables Cline to:

* Visit websites and view their content
* Test your locally running web applications
* Fill out forms and click on elements
* Capture screenshots of what it sees
* Scroll through pages to see more content

You can ask Cline to use the browser with simple instructions:

* **Click on elements**: "Click the login button"
* **Type text**: "Type 'Hello world' in the search box"
* **Scroll the page**: "Scroll down to see more content"
* **Close the browser**: "Close the browser now"

**Testing a Web Application:**

Can you start my React app with "npm start" and then check if it's working properly at http://localhost:3000?

**Analyzing a Website:**

Can you visit https://example.com and tell me what you think about its design and layout?

**Filling Out a Form:**

Please go to https://example.com/contact, fill out the contact form with some test data, and submit it.

Cline can only use one browser at a time. If you want to visit a different website, you can either:

* Ask Cline to navigate to a new URL within the same browser session
* Ask Cline to close the current browser and open a new one

If you want Cline to edit files or run commands after using the browser, you must first ask it to close the browser:

Close the browser and then update the CSS file to fix the alignment issue we saw.

The browser has a fixed viewport size (900x600 pixels by default), similar to a small laptop screen. Cline will share screenshots after each action so you can see exactly what it sees.

Cline captures browser console logs, which can be helpful for debugging web applications. These logs are included with each screenshot.
* **Web Development**: Test your websites and web applications
* **UI/UX Review**: Get feedback on website design and usability
* **Content Research**: Have Cline browse websites to gather information
* **Form Testing**: Verify that forms work correctly
* **Responsive Design Testing**: Check how websites look at different screen sizes

* **If a website doesn't load**: Try providing a direct URL with the http:// or https:// prefix
* **If clicking doesn't work**: Try describing the location of the element more precisely
* **If the browser seems stuck**: Ask Cline to close the browser and try again

When running VS Code in WSL, you'll need to configure Windows to allow WSL to connect to Chrome. Follow these steps:

```powershell
# Allow WSL to connect to Chrome's debugging port
New-NetFirewallRule -DisplayName "WSL Chrome Debug" -Direction Inbound -LocalPort 9222 -Protocol TCP -Action Allow
```

1. Open VS Code settings
2. Search for "Cline: Chrome Executable Path"
3. Set the value to the path of your Chrome executable (e.g., `C:\Program Files\Google\Chrome\Application\chrome.exe`)

Cline should now be able to use the Remote Browser feature from within WSL.

Last updated 2 days ago

**Open a website**: "Use the browser to check the website at "

---

## Page: https://docs.cline.bot/enterprise-solutions/security-concerns

* * *

Cline operates exclusively as a client-side VSCode extension with zero server-side components. This fundamental design choice ensures that your code and data remain within your secure environment at all times. Unlike traditional AI assistants that send data to external servers for processing, Cline connects directly to your chosen cloud provider's AI endpoints, keeping all sensitive information within your infrastructure boundaries.

Cline implements a strict zero data retention policy, meaning your intellectual property never leaves your secure environment. The extension does not collect, store, or transmit your code to any central servers.
This approach significantly reduces potential attack vectors that might otherwise be introduced through data transmission to third-party systems. Telemetry collection is optional and requires explicit consent.

Enterprise teams can access cutting-edge AI models through their existing cloud deployments. Cline supports seamless integration with:

* AWS Bedrock
* Google Cloud Vertex AI
* Microsoft Azure

These integrations utilize your organization's existing security credentials, including native IAM role assumption for AWS. This ensures that all AI processing occurs within your corporate cloud environment, maintaining compliance with your established security protocols.

Cline's codebase is completely open-source, allowing for comprehensive security auditing by your internal teams. This transparency enables security professionals to verify exactly how the extension functions and confirm that it adheres to your organization's security requirements. Organizations can review the code to ensure it aligns with their security policies before deployment.

The extension implements safeguards against unauthorized changes to your codebase. Cline requires explicit user approval for all file modifications and terminal commands, preventing accidental or unwanted alterations. This approval-based workflow maintains the integrity of your projects while still providing AI assistance.

For organizations with strict security review processes, Cline provides comprehensive documentation including detailed deployment diagrams, sequence diagrams illustrating all data flows, and complete security posture documentation. These materials facilitate thorough security reviews and help demonstrate compliance with enterprise data handling standards and regulations.
Enterprise editions of Cline (planned for Q2 2025) will include centralized administration features that allow organizations to:

* Manage user access with customizable permission levels
* Provision accounts with corporate credentials
* Immediately revoke access when needed
* Control which AI providers and LLM endpoints can be used
* Deploy standardized settings across the organization
* Prevent unauthorized use of personal API keys

Cline's architecture supports compliance with data sovereignty requirements and enterprise data handling regulations. The planned Enterprise Complete edition will further enhance governance with detailed audit logging, compliance reporting, and automated policy enforcement mechanisms.

By combining client-side processing, direct cloud provider integration, and transparent operations, Cline offers enterprise teams a secure way to leverage AI assistance while maintaining strict control over their sensitive code and data.

---

## Page: https://docs.cline.bot/enterprise-solutions/cloud-provider-integration

Cline supports major cloud providers like AWS Bedrock and Google's Cloud Vertex; whichever your team currently uses is appropriate, and there's no need to change providers to utilize Cline's features. For the purpose of this document, we assume your organization will use cloud-based frontier models. Cloud inference providers offer cutting-edge capabilities and the flexibility to select models which best suit your needs.

Certain scenarios may warrant using local models, including handling highly sensitive data, applications requiring consistent low-latency responses, or compliance with strict data sovereignty requirements. If your team needs to utilize local models, see our documentation on running local models with Cline.

* * *

To protect your team's data, Cline supports VPC (Virtual Private Cloud) endpoints, which create private connections between your data and AWS Bedrock.
AWS VPCs enhance security by eliminating the need for public IP addresses, network gateways, or complex firewall rules—essentially creating a private highway for data that bypasses the public internet entirely. By keeping traffic within AWS’s private network, teams also benefit from lower latency and more predictable performance when accessing services like AWS Bedrock or custom APIs. For those working with confidential information or operating in highly regulated industries like healthcare or finance, VPCs offer the perfect balance between the accessibility of cloud services and the security of private infrastructure.

* * *

1. Consult the AWS guide to creating VPC endpoints. This document specifies pre-requisites and describes the syntax used for creating VPC endpoints.
2. Follow the directions for creating VPC endpoints in the AWS console. The image below pertains to steps 4 and 5 of the AWS guide linked above.
3. Note the IP address of your VPC endpoint, open Cline's settings menu, and select `AWS Bedrock` from the API Provider dropdown.
4. Click the `Use Custom VPC endpoint` checkbox and enter the IP address of your VPC endpoint.

---

## Page: https://docs.cline.bot/enterprise-solutions/mcp-servers

**Model Context Protocol (MCP) servers expand Cline's capabilities by providing standardized access to external data sources and executable functions. By implementing MCP servers, LLM tools can dynamically retrieve and incorporate relevant information from both local and remote data sources. This capability ensures that the models operate with the most current and contextually appropriate data, improving the accuracy and relevance of their outputs.**

* * *

MCP servers follow a client-server architecture where hosts (LLM applications like Cline) initiate connections through a transport layer to MCP servers. This architecture inherently provides security benefits as it maintains clear separation between components.
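MCP messages travel over this transport as JSON-RPC 2.0, so a server's first line of defense can be a few structural checks before any message is dispatched. The sketch below is illustrative only — the size cap and error handling are hypothetical choices for a hardened deployment, not Cline's or any MCP SDK's actual code.

```python
import json

MAX_MESSAGE_BYTES = 1 << 20  # hypothetical 1 MiB cap; tune for your deployment

def validate_jsonrpc_message(raw: bytes) -> dict:
    """Reject oversized, malformed, or non-JSON-RPC-2.0 payloads early."""
    if len(raw) > MAX_MESSAGE_BYTES:
        raise ValueError("message exceeds size limit")
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError("payload is not valid JSON") from exc
    if not isinstance(msg, dict) or msg.get("jsonrpc") != "2.0":
        raise ValueError("payload is not a JSON-RPC 2.0 message")
    if "method" in msg and not isinstance(msg["method"], str):
        raise ValueError("method must be a string")
    return msg
```

Checks like these complement, rather than replace, the access controls, path validation, and rate limiting that enterprise deployments layer on top.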
Enterprise deployments should focus on the proper implementation of this architecture to ensure secure operations, particularly regarding the message exchange patterns and connection lifecycle management. For MCP architecture details and the latest protocol specifications, see the official Model Context Protocol documentation.

For enterprise environments, selecting the appropriate transport mechanism is crucial. While stdio transport works efficiently for local processes, HTTP with Server-Sent Events (SSE) transport requires additional security measures. TLS should be used for all remote connections whenever possible. This is especially important when MCP servers are deployed across different network segments within corporate infrastructure.

The MCP architecture defines standard error codes and message types (Requests, Results, Errors, and Notifications), providing a structured framework for secure communication. Security teams should consider message validation, sanitizing inputs, checking message size limits, and verifying JSON-RPC format. Additionally, implementing resource protection through access controls, path validation, and request rate limiting helps prevent potential abuse of MCP server capabilities.

For enterprise compliance requirements, implementing comprehensive logging of protocol events, message flows, and errors is essential. The MCP architecture supports diagnostic capabilities including health checks, connection state monitoring, and resource usage tracking. Organizations should extend these capabilities to meet their specific compliance needs, particularly for audit trails of all MCP server interactions and resource access patterns.

By leveraging the client-server design of the MCP architecture and implementing appropriate security controls at each layer, enterprises can safely integrate MCP servers into their environments while maintaining their security posture and meeting regulatory requirements.

---

## Page: https://docs.cline.bot/enterprise-solutions/custom-instructions
## Custom Instructions

**Creating standardized project instructions ensures that all team members work within consistent guidelines. Start by documenting your project's technical foundation, then identify which information needs to be included in the instructions. The exact scope will vary depending on your team's needs, but generally it's best to provide as much information as possible. By creating comprehensive instructions that all team members follow, you establish a shared understanding of how code should be written, tested, and deployed across your project, resulting in more maintainable and consistent software.**

* * *

Here are a few topics and examples to consider for your team's custom instructions:

1. **Testing framework and specific commands**
   * "All components must include Jest tests with at least 85% coverage. Run tests using `npm run test:coverage` before submitting any pull request."
2. **Explicit library preferences**
   * "Use React Query for data fetching and state management. Avoid Redux unless specifically required for complex global state. For styling, use Tailwind CSS with our custom theme configuration found in `src/styles/theme.js`."
3. **Where to find documentation**
   * "All API documentation is available in our internal Notion workspace under 'Engineering > API Reference'. For component usage examples, refer to our Storybook instance at `https://storybook.internal.company.com`."
4. **Which MCP servers to use, and for which purposes**
   * "For database operations, use the Postgres MCP server with credentials stored in 1Password under 'Development > Database'. For deployments, use the AWS MCP server, which requires the deployment role from IAM. Refer to `docs/mcp-setup.md` for configuration instructions."
5. **Coding conventions specific to your project**
   * "Name all React components using PascalCase and all helper functions using camelCase. Place components in the `src/components` directory organized by feature, not by type.
Always use TypeScript interfaces for prop definitions."

Last updated 11 days ago

---

## Page: https://docs.cline.bot/mcp-servers/mcp

This document explains Model Context Protocol (MCP) servers, their capabilities, and how Cline can help build and use them.

Model Context Protocol is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications: it provides a standardized way to connect AI models to different data sources and tools.

MCP servers act as intermediaries between large language models (LLMs), such as Claude, and external tools or data sources. They are small programs that expose functionality to LLMs, enabling them to interact with the outside world through the MCP. An MCP server is essentially like an API that an LLM can use. MCP servers define a set of "**tools**," which are functions the LLM can execute. These tools offer a wide range of capabilities.

**Here's how MCP works:**

* **MCP hosts** discover the capabilities of connected servers and load their tools, prompts, and resources.
* **Resources** provide consistent access to read-only data, akin to file paths or database queries.
* **Security** is ensured as servers isolate credentials and sensitive data. Interactions require explicit user approval.

The potential of MCP servers is vast. They can be used for a variety of purposes.
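To make the "tools" idea above concrete, here is a minimal, dependency-free sketch of what a server-side tool amounts to: a name, an input schema, and a handler. The tool name, schema, and dispatch function below are illustrative, not part of any real server or of the MCP SDK; the result shape loosely follows MCP's text-content convention.

```javascript
// Illustrative sketch of an MCP-style tool: a name, a JSON Schema
// describing its input, and a handler the server runs on "tools/call".
const getForecastTool = {
  name: "get_forecast", // hypothetical tool name
  description: "Return a canned forecast for a city",
  inputSchema: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
  // The handler is what actually executes when the LLM invokes the tool.
  handler: ({ city }) => ({
    content: [{ type: "text", text: `Forecast for ${city}: sunny` }],
  }),
};

// A server would dispatch an incoming tools/call request roughly like this:
function callTool(tool, args) {
  for (const key of tool.inputSchema.required ?? []) {
    if (!(key in args)) throw new Error(`Missing argument: ${key}`);
  }
  return tool.handler(args);
}
```

In a real server the schema validation and dispatch are handled by the MCP SDK; the point here is only that a tool is a typed function the LLM can call.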
**Here are some concrete examples of how MCP servers can be used:**

* **Web Services and API Integration:**
  * Monitor GitHub repositories for new issues
  * Post updates to Twitter based on specific triggers
  * Retrieve real-time weather data for location-based services
* **Browser Automation:**
  * Automate web application testing
  * Scrape e-commerce sites for price comparisons
  * Generate screenshots for website monitoring
* **Database Queries:**
  * Generate weekly sales reports
  * Analyze customer behavior patterns
  * Create real-time dashboards for business metrics
* **Project and Task Management:**
  * Automate Jira ticket creation based on code commits
  * Generate weekly progress reports
  * Create task dependencies based on project requirements
* **Codebase Documentation:**
  * Generate API documentation from code comments
  * Create architecture diagrams from code structure
  * Maintain up-to-date README files

Cline does not come with any pre-installed MCP servers. You'll need to find and install them separately.

**Choose the right approach for your needs:**

* **Ask Cline:** You can ask Cline to help you find or create MCP servers
* **Customize Existing Servers:** Modify existing servers to fit your specific requirements

Cline simplifies the building and use of MCP servers through its AI capabilities:

* **Natural language understanding:** Instruct Cline in natural language to build an MCP server by describing its functionalities. Cline will interpret your instructions and generate the necessary code.
* **Cloning and building servers:** Cline can clone existing MCP server repositories from GitHub and build them automatically.
* **Configuration and dependency management:** Cline handles configuration files, environment variables, and dependencies.
* **Troubleshooting and debugging:** Cline helps identify and resolve errors during development.
* **Tool execution:** Cline seamlessly integrates with MCP servers, allowing you to execute their defined tools.
* **Context-aware interactions:** Cline can intelligently suggest using relevant tools based on conversation context.
* **Dynamic integrations:** Combine multiple MCP server capabilities for complex tasks. For example, Cline could use a GitHub server to get data and a Notion server to create a formatted report.

When working with MCP servers, it's important to follow security best practices:

* **Authentication:** Always use secure authentication methods for API access
* **Environment Variables:** Store sensitive information in environment variables
* **Access Control:** Limit server access to authorized users only
* **Data Validation:** Validate all inputs to prevent injection attacks
* **Logging:** Implement secure logging practices without exposing sensitive data

There are various resources available for finding and learning about MCP servers:

* **Community Repositories:** Check for community-maintained lists of MCP servers on GitHub
* **Cline Marketplace:** Install a server with one click from Cline's MCP Marketplace
* **Build Your Own:** Create custom MCP servers using the official MCP documentation
* **Online Directories:** Browse directories such as PulseMCP that list and rank MCP servers
* **YouTube Tutorial (AI-Driven Coder):** A video guide for building and using MCP servers

---

## Page: https://docs.cline.bot/mcp-servers/mcp-quickstart

## MCP Marketplace

MCP servers are specialized extensions that enhance Cline's capabilities. They enable Cline to perform additional tasks like fetching web pages, processing images, accessing APIs, and much more. The MCP Marketplace provides a one-click installation experience for hundreds of MCP servers across various categories.

* In Cline, click the "Extensions" button (square icon) in the top toolbar
* The MCP Marketplace will open, showing available servers by category
* Browse servers by category (Search, File-systems, Browser-automation, Research-data, etc.)
* Click on a server to see details about its capabilities and requirements
* Click the install button for your chosen server
* If the server requires an API key (most do), Cline will guide you through:
  * Where to get the API key
  * How to enter it securely
* The server will be added to your MCP settings automatically
* Cline will show confirmation when installation is complete
* Check the server status in Cline's MCP settings UI
* After successful installation, Cline will automatically integrate the server's capabilities
* You'll see new tools and resources available in Cline's system prompt
* Simply ask Cline to use the capabilities of your new server
  * Example: "Search the web for recent React updates using Perplexity"

**Corporate Users:** If you're using Cline in a corporate environment, ensure you have permission to install third-party MCP servers according to your organization's security policies.

When you install an MCP server, several things happen automatically:

* The server code is cloned/installed to `/Users/<username>/Documents/Cline/MCP/`
* Dependencies are installed
* The server is built (TypeScript/JavaScript compilation or Python package installation)
* The MCP settings file is updated with your server configuration
  * This file is located at: `/Users/<username>/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json`
* Environment variables (like API keys) are securely stored
* The server path is registered
* Cline detects the configuration change
* Cline launches your server as a separate process
* Communication is established via stdio or HTTP
* Your server's capabilities are added to Cline's system prompt
* Tools become available via `use_mcp_tool` commands
* Resources become available via `access_mcp_resource` commands
* Cline can now use these capabilities when prompted by the user

Make sure your system meets these requirements:

* **Node.js 18.x or newer**
  * Check by running: `node --version`
  * Install from: https://nodejs.org/
  * Required for JavaScript/TypeScript implementations
* **Python 3.10 or newer**
  * Check by running: `python --version`
  * Install from: https://python.org/
  * Note: Some specialized implementations may require Python 3.11+
* **UV Package Manager**
  * Modern Python package manager for dependency isolation
  * Install using: `curl -LsSf https://astral.sh/uv/install.sh | sh` or `pip install uv`
  * Verify with: `uv --version`

If any of these commands fail or show older versions, please install/update before continuing!

* Ensure your internet connection is stable
* Check that you have the necessary permissions to install new software
* Verify that the API key was entered correctly (if required)
* Check the server status in the MCP settings UI for any error messages

To completely remove a faulty MCP server:

1. Open the MCP settings file: `/Users/<username>/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json`
2. Delete the entire entry for your server from the `mcpServers` object
3. Save the file
4. Restart Cline

If you're getting an error when using an MCP server, you can try the following:

* Check the MCP settings file for errors
* Use a Claude Sonnet model for installation
* Verify that paths to your server's files are correct
* Ensure all required environment variables are set
* Check if another process is using the same port (for HTTP-based servers)
* Try removing and reinstalling the server (remove it from both the `cline_mcp_settings.json` file and the `/Users/<username>/Documents/Cline/MCP/` directory)
* Run the server's command with its arguments directly in a terminal. This lets you see the same errors that Cline is seeing.

Cline is already aware of your active MCP servers and what they are for, but when you have a lot of MCP servers enabled, it can be useful to define when to use each server.
Utilize a `.clinerules` file or custom instructions to support intelligent MCP server activation through keyword-based triggers, making Cline's tool selection more intuitive and context-aware.

MCP Rules group your connected MCP servers into functional categories and define trigger keywords that activate them automatically when detected in your conversations with Cline.

```json
{
  "mcpRules": {
    "webInteraction": {
      "servers": ["firecrawl-mcp-server", "fetch-mcp"],
      "triggers": ["web", "scrape", "browse", "website"],
      "description": "Tools for web browsing and scraping"
    }
  }
}
```

1. **Categories**: Group related servers (e.g., "webInteraction", "mediaAndDesign")
2. **Servers**: List server names in each category
3. **Triggers**: Keywords that activate these servers
4. **Description**: Human-readable category explanation

* **Contextual Tool Selection**: Cline selects appropriate tools based on conversation context
* **Reduced Friction**: No need to manually specify which tool to use
* **Organized Capabilities**: Logically group related tools and servers
* **Prioritization**: Handle ambiguous cases with explicit priority ordering

When you write "Can you scrape this website?", Cline detects "scrape" and "website" as triggers, automatically selecting web-related MCP servers. For finance tasks like "What's Apple's stock price?", keywords like "stock" and "price" trigger finance-related servers.
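The trigger detection just described can be sketched as a simple keyword scan over the `mcpRules` object. This is a rough illustration of the idea, not Cline's actual implementation; the rules object mirrors the example above:

```javascript
// Given an mcpRules object, return the categories whose trigger
// keywords appear (case-insensitively) in the user's message.
function matchCategories(mcpRules, message) {
  const text = message.toLowerCase();
  return Object.entries(mcpRules)
    .filter(([, rule]) =>
      rule.triggers.some((t) => text.includes(t.toLowerCase()))
    )
    .map(([name]) => name);
}

const rules = {
  webInteraction: {
    servers: ["firecrawl-mcp-server", "fetch-mcp"],
    triggers: ["web", "scrape", "browse", "website"],
    description: "Tools for web browsing and scraping",
  },
};
```

A message like "Can you scrape this website?" matches the `webInteraction` category, whose listed servers would then be preferred.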
```json
{
  "mcpRules": {
    "category1": {
      "servers": ["server-name-1", "server-name-2"],
      "triggers": ["keyword1", "keyword2", "phrase1", "phrase2"],
      "description": "Description of what these tools do"
    },
    "category2": {
      "servers": ["server-name-3"],
      "triggers": ["keyword3", "keyword4", "phrase3"],
      "description": "Description of what these tools do"
    },
    "category3": {
      "servers": ["server-name-4", "server-name-5"],
      "triggers": ["keyword5", "keyword6", "phrase4"],
      "description": "Description of what these tools do"
    }
  },
  "defaultBehavior": {
    "priorityOrder": ["category1", "category2", "category3"],
    "fallbackBehavior": "Ask user which tool would be most appropriate"
  }
}
```

Add this to your `.clinerules` file or to your custom instructions to make Cline's MCP server selection more intuitive and context-aware.

Last updated 18 days ago

---

## Page: https://docs.cline.bot/mcp-servers/mcp-server-from-github

## Adding MCP Servers from GitHub

Last updated 22 days ago

This guide provides a step-by-step walkthrough of how to use Cline to build an existing MCP server from a GitHub repository. There are multiple places online to find MCP servers:

* **Cline can automatically add MCP servers to its list, which you can then edit.** Cline can clone repositories directly from GitHub and build the servers for you.
* **GitHub:** Two of the most common places to find MCP servers on GitHub are the official MCP servers repository and community-maintained lists.
* **Online directories:** Several websites list MCP servers. These directories allow users to sort servers by various criteria such as downloads, date, stars, and use case. Each entry provides information such as features, tools, and configuration instructions.
* **PulseMCP:** This website has a blog post discussing how AI could use MCP servers to make websites obsolete. PulseMCP also includes an FAQ section about MCP servers.

1. **Initiate the Process:** Provide Cline with the following information:
   * **GitHub Repository URL:** The URL of the server's repository.
   * **README.md Contents:** This is optional but helpful for Cline to understand the server's purpose and configuration. You can copy the README.md file from the GitHub repository.

2. **Example Interaction with Cline:**

   User: "Cline, I want to add the MCP server for Brave browser control. Here's the GitHub link: @https://github.com/modelcontextprotocol/servers/tree/main/src/brave-search Can you add it?"

   Cline: "OK. Cloning the repository to the MCP directory. It needs to be built because it has a 'package.json' file. Should I run 'npm run build'?"

   User: "Yes, build it."

   Cline: "Build complete. This server needs a browser path. Where's your Brave browser executable?"

   User: "It's in '/Applications/Brave Browser.app/Contents/MacOS/Brave Browser'."

   Cline: "OK. I've updated the server config with that path and added it to 'cline_mcp_settings.json'. Anything else?"

   User: "No, that's all. Let's test it."

   Cline: "Great! Starting the MCP Inspector to test the server connection. After that, we can try controlling the browser from Cline."

3. **Cline's Actions:** Based on your instructions, Cline will perform the following:
   * **Repository Cloning:** Cline will clone the repository to your local machine, usually in the directory specified in your configuration.
   * **Tweaking:** You can guide Cline to modify the server's configuration. For instance:
     * **User:** "This server requires an API key. Can you find where it should be added?"
     * Cline may automatically update the `cline_mcp_settings.json` file or other relevant files based on your instructions.
   * **Building the Server:** Cline will run the appropriate build command for the server, which is commonly `npm run build`.
   * **Adding Server to Settings:** Cline will add the server's configuration to the `cline_mcp_settings.json` file.

1. **Test the Server:** Once Cline finishes the build process, test the server to make sure it works as expected. Cline can assist you if you encounter any problems.
2. 
**MCP Inspector:** You can use the MCP Inspector to test the server's connection and functionality.

* **Understand the Basics:** While Cline simplifies the process, it's beneficial to have a basic understanding of the server's code, the MCP protocol, and how to configure the server. This allows for more effective troubleshooting and customization.
* **Clear Instructions:** Provide clear and specific instructions to Cline throughout the process.
* **Testing:** Thoroughly test the server after installation and configuration to ensure it functions correctly.
* **Version Control:** Use a version control system (like Git) to track changes to the server's code.
* **Stay Updated:** Keep your MCP servers updated to benefit from the latest features and security patches.

---

## Page: https://docs.cline.bot/mcp-servers/configuring-mcp-servers

Utilizing MCP servers will increase your token usage. Cline offers the ability to restrict or disable MCP server functionality as desired.

1. Click the "MCP Servers" icon in the top navigation bar of the Cline extension.
2. Select the "Installed" tab, then click the "Advanced MCP Settings" link at the bottom of that pane.
3. Cline will open a new settings window. Find `Cline > Mcp: Mode` and make your selection from the dropdown menu.

Each MCP server has its own configuration panel where you can modify settings, manage tools, and control its operation. To access these settings:

1. Click the "MCP Servers" icon in the top navigation bar of the Cline extension.
2. Locate the MCP server you want to manage in the list, and open it by clicking on its name.

To delete a server:

1. Click the Trash icon next to the MCP server you would like to delete, or the red Delete Server button at the bottom of the MCP server config box.

NOTE: There is no delete confirmation dialog box.

To restart a server:

1. Click the Restart button next to the MCP server you would like to restart, or the gray Restart Server button at the bottom of the MCP server config box.

1. 
Click the toggle switch next to the MCP server to enable/disable servers individually.

To set the maximum time to wait for a response after a tool call to the MCP server:

1. Click the `Network Timeout` dropdown at the bottom of the individual MCP server's config box and change the time. The default is 1 minute, but it can be set between 30 seconds and 1 hour.

Settings for all installed MCP servers are located in the `cline_mcp_settings.json` file:

1. Click the MCP Servers icon at the top navigation bar of the Cline pane.
2. Select the "Installed" tab.
3. Click the "Configure MCP Servers" button at the bottom of the pane.

The file uses a JSON format with an `mcpServers` object containing named server configurations:

```json
{
  "mcpServers": {
    "server1": {
      "command": "python",
      "args": ["/path/to/server.py"],
      "env": { "API_KEY": "your_api_key" },
      "alwaysAllow": ["tool1", "tool2"],
      "disabled": false
    }
  }
}
```

_Example of MCP Server config in Cline (STDIO Transport)_

* * *

MCP supports two transport types for server communication:

**STDIO Transport**

Used for local servers running on your machine:

* Communicates via standard input/output streams
* Lower latency (no network overhead)
* Better security (no network exposure)
* Simpler setup (no HTTP server needed)
* Runs as a child process on your machine

STDIO configuration example:

```json
{
  "mcpServers": {
    "local-server": {
      "command": "node",
      "args": ["/path/to/server.js"],
      "env": { "API_KEY": "your_api_key" },
      "alwaysAllow": ["tool1", "tool2"],
      "disabled": false
    }
  }
}
```

**SSE Transport**

Used for remote servers accessed over HTTP/HTTPS:

* Communicates via the Server-Sent Events protocol
* Can be hosted on a different machine
* Supports multiple client connections
* Requires network access
* Allows centralized deployment and management

SSE configuration example:

```json
{
  "mcpServers": {
    "remote-server": {
      "url": "https://your-server-url.com/mcp",
      "headers": { "Authorization": "Bearer your-token" },
      "alwaysAllow": ["tool3"],
      "disabled": false
    }
  }
}
```

* * *

After configuring an MCP server, Cline
will automatically detect available tools and resources. To use them:

1. Type your request in Cline's conversation window
2. Cline will identify when an MCP tool can help with your task
3. Approve the tool use when prompted (or use auto-approval)

Example: "Analyze the performance of my API" might use an MCP tool that tests API endpoints.

Common issues and solutions:

* **Server Not Responding:** Check if the server process is running and verify network connectivity
* **Permission Errors:** Ensure proper API keys and credentials are configured in your `cline_mcp_settings.json` file
* **Tool Not Available:** Confirm the server properly implements the tool and that it's not disabled in settings
* **Slow Performance:** Try adjusting the network timeout value for the specific MCP server

For more in-depth information about how the STDIO and SSE transports work, see the MCP Transport Mechanisms page.

---

## Page: https://docs.cline.bot/mcp-servers/connecting-to-a-remote-server

## Connecting to a Remote Server

The Model Context Protocol (MCP) allows Cline to communicate with external servers that provide additional tools and resources to extend its capabilities. This guide explains how to add and connect to remote MCP servers through the MCP Servers interface.

To access the MCP Servers interface in Cline:

1. Click on the Cline icon in the VSCode sidebar
2. Open the menu (⋮) in the top right corner of the Cline panel
3. Select "MCP Servers" from the dropdown menu

The MCP Servers interface is divided into three main tabs:

* **Marketplace**: Discover and install pre-configured MCP servers (if enabled)
* **Remote Servers**: Connect to existing MCP servers via URL endpoints
* **Installed**: Manage your connected MCP servers

The "Remote Servers" tab allows you to connect to any MCP server that's accessible via a URL endpoint:

1. Click on the "Remote Servers" tab in the MCP Servers interface
2. 
Fill in the required information:
   * **Server Name**: Provide a unique, descriptive name for the server
   * **Server URL**: Enter the complete URL endpoint of the MCP server (e.g., `https://example.com/mcp-sse`)
3. Click "Add Server" to initiate the connection
4. Cline will attempt to connect to the server and display the connection status

> **Note**: When connecting to a remote server, ensure you trust the source, as MCP servers can execute code in your environment.

If you're looking for MCP servers to connect to, several third-party marketplaces provide directories of available servers with various capabilities.

> **Warning**: The following third-party marketplaces are listed for informational purposes only. Cline does not endorse, verify, or take responsibility for any servers listed on these marketplaces. These servers are cloud-hosted services that process your requests and may have access to data you share with them. Always review privacy policies and terms of use before connecting to third-party services.

Smithery is a third-party MCP server marketplace that allows users to discover and connect to a variety of Model Context Protocol (MCP) servers. If you're using an MCP-compatible client (such as Cursor, Claude Desktop, or Cline), you can browse available servers and integrate them directly into your workflow. Please note: Smithery is maintained independently and is not affiliated with our project. Use at your own discretion.

Once added, your MCP servers appear in the "Installed" tab, where you can manage them. Each server displays its current status:

* **Green dot**: Connected and ready to use
* **Yellow dot**: In the process of connecting
* **Red dot**: Disconnected or experiencing errors

Click on a server to expand its settings panel:

1. **Tools & Resources**:
   * View all available tools and resources from the server
   * Configure auto-approval settings for tools (if enabled)
2. 
**Request Timeout**:
   * Set how long Cline should wait for server responses
   * Options range from 30 seconds to 1 hour
3. **Server Management**:
   * **Restart Server**: Reconnect if the server becomes unresponsive
   * **Delete Server**: Remove the server from your configuration

Toggle the switch next to each server to enable or disable it:

* **Enabled**: Cline can use the server's tools and resources
* **Disabled**: The server remains in your configuration but is not active

If a server fails to connect:

1. An error message will be displayed with details about the failure
2. Check that the server URL is correct and the server is running
3. Use the "Restart Server" button to attempt reconnection
4. If problems persist, you can delete the server and try adding it again

For advanced users, Cline stores MCP server configurations in a JSON file that can be modified:

1. In the "Installed" tab, click "Configure MCP Servers" to access the settings file
2. The configuration for each server follows this format:

```json
{
  "mcpServers": {
    "exampleServer": {
      "url": "https://example.com/mcp-sse",
      "disabled": false,
      "autoApprove": ["tool1", "tool2"],
      "timeout": 30
    }
  }
}
```

Key configuration options:

* **url**: The endpoint URL (for remote servers)
* **disabled**: Whether the server is currently disabled (true/false)
* **autoApprove**: List of tool names that don't require confirmation
* **timeout**: Maximum time in seconds to wait for server responses

For additional MCP settings, click the "Advanced MCP Settings" link to access VSCode settings.

Once connected, Cline can use the tools and resources provided by the MCP server. When Cline suggests using an MCP tool:

1. A tool approval prompt will appear (unless auto-approved)
2. Review the tool details and parameters before approving
3. The tool will execute and return results to Cline

Last updated 3 days ago

Smithery provides access to a wide range of third-party servers that support the Model Context Protocol (MCP).
These servers expose APIs for services like GitHub, Notion, Slack, and others. Each server includes configuration instructions and built-in authentication support (e.g. OAuth or API keys). To connect, locate the desired service in the marketplace and follow the integration steps provided there. To explore available options, visit the Smithery marketplace.

---

## Page: https://docs.cline.bot/mcp-servers/mcp-transport-mechanisms

## MCP Transport Mechanisms

Model Context Protocol (MCP) supports two primary transport mechanisms for communication between Cline and MCP servers: Standard Input/Output (STDIO) and Server-Sent Events (SSE). Each has distinct characteristics, advantages, and use cases.

STDIO transport runs locally on your machine and communicates via standard input/output streams.

1. The client (Cline) spawns an MCP server as a child process
2. Communication happens through process streams: the client writes to the server's STDIN, and the server responds on STDOUT
3. Each message is delimited by a newline character
4. 
Messages are formatted as JSON-RPC 2.0

```
Client                     Server
  |                           |
  |----- JSON message ------->|  (via STDIN)
  |                           |  (processes request)
  |<---- JSON message --------|  (via STDOUT)
  |                           |
```

* **Locality**: Runs on the same machine as Cline
* **Performance**: Very low latency and overhead (no network stack involved)
* **Simplicity**: Direct process communication without network configuration
* **Relationship**: One-to-one relationship between client and server
* **Security**: Inherently more secure, as there is no network exposure

STDIO transport is ideal for:

* Local integrations and tools running on the same machine
* Security-sensitive operations
* Low-latency requirements
* Single-client scenarios (one Cline instance per server)
* Command-line tools or IDE extensions

```javascript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';

const server = new Server({ name: 'local-server', version: '1.0.0' });
// Register tools...

// Use STDIO transport
const transport = new StdioServerTransport();
await server.connect(transport);
```

Server-Sent Events (SSE) transport runs on a remote server and communicates over HTTP/HTTPS.

1. The client (Cline) connects to the server's SSE endpoint via an HTTP GET request
2. This establishes a persistent connection over which the server can push events to the client
3. For client-to-server communication, the client makes HTTP POST requests to a separate endpoint
4. 
Communication happens over two channels:
   * Event Stream (GET): Server-to-client updates
   * Message Endpoint (POST): Client-to-server requests

```
Client                                Server
  |                                      |
  |---- HTTP GET /events --------------->|  (establish SSE connection)
  |<--- SSE event stream ----------------|  (persistent connection)
  |                                      |
  |---- HTTP POST /message ------------->|  (client request)
  |<--- SSE event with response ---------|  (server response)
  |                                      |
```

* **Remote Access**: Can be hosted on a different machine from your Cline instance
* **Scalability**: Can handle multiple client connections concurrently
* **Protocol**: Works over standard HTTP (no special protocols needed)
* **Persistence**: Maintains a persistent connection for server-to-client messages
* **Authentication**: Can use standard HTTP authentication mechanisms

SSE transport is better for:

* Remote access across networks
* Multi-client scenarios
* Public services
* Centralized tools that many users need to access
* Integration with web services

```javascript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { SSEServerTransport } from '@modelcontextprotocol/sdk/server/sse.js';
import express from 'express';

const app = express();
const server = new Server({ name: 'remote-server', version: '1.0.0' });
// Register tools...

// Use SSE transport: the GET endpoint opens the event stream,
// and the POST endpoint receives client messages
let transport;
app.get('/events', async (req, res) => {
  transport = new SSEServerTransport('/message', res);
  await server.connect(transport);
});
app.post('/message', async (req, res) => {
  await transport.handlePostMessage(req, res);
});

app.listen(3000, () => {
  console.log('MCP server listening on port 3000');
});
```

The choice between STDIO and SSE transports directly impacts how you'll deploy and manage your MCP servers.
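Before looking at deployment, it may help to see what the SSE channel actually carries: each server-to-client message is a `text/event-stream` event wrapping a JSON-RPC payload. The following is a small illustrative sketch of that framing, following the standard SSE wire format; it is not the SDK's implementation, and the message content is made up:

```javascript
// Frame a JSON-RPC message as a Server-Sent Events "message" event.
// SSE events are plain text: an optional "event:" line, one or more
// "data:" lines, terminated by a blank line.
function toSseEvent(jsonRpcMessage) {
  return `event: message\ndata: ${JSON.stringify(jsonRpcMessage)}\n\n`;
}

// Example: framing an illustrative JSON-RPC result
const event = toSseEvent({ jsonrpc: "2.0", id: 1, result: { ok: true } });
```

The client keeps the GET connection open and parses these `data:` lines back into JSON-RPC messages, which is why SSE needs no special protocol beyond plain HTTP.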
STDIO servers run locally on the same machine as Cline, which has several important implications: * **Installation**: The server executable must be installed on each user's machine * **Distribution**: You need to provide installation packages for different operating systems * **Updates**: Each instance must be updated separately * **Resources**: Uses the local machine's CPU, memory, and disk * **Access Control**: Relies on the local machine's filesystem permissions * **Integration**: Easy integration with local system resources (files, processes) * **Execution**: Starts and stops with Cline (child process lifecycle) * **Dependencies**: Any dependencies must be installed on the user's machine **Practical Example****** A local file search tool using STDIO would: * Run on the user's machine * Have direct access to the local filesystem * Start when needed by Cline * Not require network configuration * Need to be installed alongside Cline or via a package manager SSE servers can be deployed to remote servers and accessed over the network: * **Installation**: Installed once on a server, accessed by many users * **Distribution**: Single deployment serves multiple clients * **Updates**: Centralized updates affect all users immediately * **Resources**: Uses server resources, not local machine resources * **Access Control**: Managed through authentication and authorization systems * **Integration**: More complex integration with user-specific resources * **Execution**: Runs as an independent service (often continuously) * **Dependencies**: Managed on the server, not on user machines **Practical Example****** A database query tool using SSE would: * Run on a central server * Connect to databases with server-side credentials * Be continuously available for multiple users * Require proper network security configuration * Be deployed using container or cloud technologies Some scenarios benefit from a hybrid approach: 1. 
**STDIO with Network Access**: A local STDIO server that acts as a proxy to remote services 2. **SSE with Local Commands**: A remote SSE server that can trigger operations on the client machine through callbacks 3. **Gateway Pattern**: STDIO servers for local operations that connect to SSE servers for specialized functions **Location** Local machine only Local or remote **Clients** Single client Multiple clients **Performance** Lower latency Higher latency (network overhead) **Setup Complexity** Simpler More complex (requires HTTP server) **Security** Inherently secure Requires explicit security measures **Network Access** Not needed Required **Scalability** Limited to local machine Can distribute across network **Deployment** Per-user installation Centralized installation **Updates** Distributed updates Centralized updates **Resource Usage** Uses client resources Uses server resources **Dependencies** Client-side dependencies Server-side dependencies For detailed information on configuring STDIO and SSE transports in Cline, including examples, see Configuring MCP Servers. Last updated 3 days ago --- ## Page: https://docs.cline.bot/mcp-servers/mcp-server-from-scratch 1. MCP Servers ## MCP Server Development Protocol This protocol is designed to streamline the development process of building MCP servers with Cline. > 🚀 **Build and share your MCP servers with the world.** Once you've created a great MCP server, submit it to the Cline MCP Marketplace to make it discoverable and one-click installable by thousands of developers. Model Context Protocol (MCP) servers extend AI assistants like Cline by giving them the ability to: * Access external APIs and services * Retrieve real-time data * Control applications and local systems * Perform actions beyond what text prompts alone can achieve Without MCP, AI assistants are powerful but isolated. With MCP, they gain the ability to interact with virtually any digital system. 
The heart of effective MCP server development is following a structured protocol. This protocol is implemented through a `.clinerules` file that lives at the **root** of your MCP working directory (/Users/your-name/Documents/Cline/MCP). A `.clinerules` file is a special configuration that Cline reads automatically when working in the directory where it's placed. These files: * Configure Cline's behavior and enforce best practices * Switch Cline into a specialized MCP development mode * Provide a step-by-step protocol for building servers * Implement safety measures like preventing premature completion * Guide you through planning, implementation, and testing phases Here's the complete MCP Server Development Protocol that should be placed in your `.clinerules` file: # MCP Server Development Protocol ⚠️ CRITICAL: DO NOT USE attempt_completion BEFORE TESTING ⚠️ ## Step 1: Planning (PLAN MODE) - What problem does this tool solve? - What API/service will it use? - What are the authentication requirements? □ Standard API key □ OAuth (requires separate setup script) □ Other credentials ## Step 2: Implementation (ACT MODE) 1. Bootstrap - For web services, JavaScript integration, or Node.js environments: ```bash npx @modelcontextprotocol/create-server my-server cd my-server npm install ``` - For data science, ML workflows, or Python environments: ```bash pip install mcp # Or with uv (recommended) uv add "mcp[cli]" ``` 2. 
Core Implementation - Use MCP SDK - Implement comprehensive logging - TypeScript (for web/JS projects): ```typescript console.error('[Setup] Initializing server...'); console.error('[API] Request to endpoint:', endpoint); console.error('[Error] Failed with:', error); ``` - Python (for data science/ML projects): ```python import logging logging.error('[Setup] Initializing server...') logging.error(f'[API] Request to endpoint: {endpoint}') logging.error(f'[Error] Failed with: {str(error)}') ``` - Add type definitions - Handle errors with context - Implement rate limiting if needed 3. Configuration - Get credentials from user if needed - Add to MCP settings: - For TypeScript projects: ```json { "mcpServers": { "my-server": { "command": "node", "args": ["path/to/build/index.js"], "env": { "API_KEY": "key" }, "disabled": false, "autoApprove": [] } } } ``` - For Python projects: ```bash # Directly with command line mcp install server.py -v API_KEY=key # Or in settings.json { "mcpServers": { "my-server": { "command": "python", "args": ["server.py"], "env": { "API_KEY": "key" }, "disabled": false, "autoApprove": [] } } } ``` ## Step 3: Testing (BLOCKER ⛔️) <thinking> BEFORE using attempt_completion, I MUST verify: □ Have I tested EVERY tool? □ Have I confirmed success from the user for each test? □ Have I documented the test results? If ANY answer is "no", I MUST NOT use attempt_completion. </thinking> 1. Test Each Tool (REQUIRED) □ Test each tool with valid inputs □ Verify output format is correct ⚠️ DO NOT PROCEED UNTIL ALL TOOLS TESTED ## Step 4: Completion ❗ STOP AND VERIFY: □ Every tool has been tested with valid inputs □ Output format is correct for each tool Only after ALL tools have been tested can attempt_completion be used. 
## Key Requirements - ✓ Must use MCP SDK - ✓ Must have comprehensive logging - ✓ Must test each tool individually - ✓ Must handle errors gracefully - ⛔️ NEVER skip testing before completion When this `.clinerules` file is present in your working directory, Cline will: 1. Start in **PLAN MODE** to design your server before implementation 2. Enforce proper implementation patterns in **ACT MODE** 3. Require testing of all tools before allowing completion 4. Guide you through the entire development lifecycle Creating an MCP server requires just a few simple steps to get started: First, add a `.clinerules` file to the root of your MCP working directory using the protocol above. This file configures Cline to use the MCP development protocol when working in this folder. Begin your Cline chat by clearly describing what you want to build. Be specific about: * The purpose of your MCP server * Which API or service you want to integrate with * Any specific tools or features you need For example: I want to build an MCP server for the AlphaAdvantage financial API. It should allow me to get real-time stock data, perform technical analysis, and retrieve company financial information. Cline will automatically start in PLAN MODE, guiding you through the planning process: * Discussing the problem scope * Reviewing API documentation * Planning authentication methods * Designing tool interfaces When ready, switch to ACT MODE using the toggle at the bottom of the chat to begin implementation. One of the most effective ways to help Cline build your MCP server is to share official API documentation right at the start: Here's the API documentation for the service: [Paste API documentation here] Providing comprehensive API details (endpoints, authentication, data structures) significantly improves Cline's ability to implement an effective MCP server. 
In this collaborative phase, you work with Cline to design your MCP server: * Define the problem scope * Choose appropriate APIs * Plan authentication methods * Design the tool interfaces * Determine data formats Once planning is complete, Cline helps implement the server: * Set up the project structure * Write the implementation code * Configure settings * Test each component thoroughly * Finalize documentation Let's walk through the development process of our AlphaAdvantage MCP server, which provides stock data analysis and reporting capabilities. During the planning phase, we: 1. **Defined the problem**: Users need access to financial data, stock analysis, and market insights directly through their AI assistant 2. **Selected the API**: AlphaAdvantage API for financial market data * Standard API key authentication * Rate limits of 5 requests per minute (free tier) * Various endpoints for different financial data types 3. **Designed the tools needed**: * Stock overview information (current price, company details) * Technical analysis with indicators (RSI, MACD, etc.) * Fundamental analysis (financial statements, ratios) * Earnings report data * News and sentiment analysis 4. 
**Planned data formatting**: * Clean, well-formatted markdown output * Tables for structured data * Visual indicators (↑/↓) for trends * Proper formatting of financial numbers We began by bootstrapping the project: npx @modelcontextprotocol/create-server alphaadvantage-mcp cd alphaadvantage-mcp npm install axios node-cache Next, we structured our project with: src/ ├── api/ │ └── alphaAdvantageClient.ts # API client with rate limiting & caching ├── formatters/ │ └── markdownFormatter.ts # Output formatters for clean markdown └── index.ts # Main MCP server implementation **API Client Implementation** The API client implementation included: * **Rate limiting**: Enforcing the 5 requests per minute limit * **Caching**: Reducing API calls with strategic caching * **Error handling**: Robust error detection and reporting * **Typed interfaces**: Clear TypeScript types for all data Key implementation details: /** * Manage rate limiting based on free tier (5 calls per minute) */ private async enforceRateLimit() { if (this.requestsThisMinute >= 5) { console.error("[Rate Limit] Rate limit reached. Waiting for next minute..."); return new Promise<void>((resolve) => { const remainingMs = 60 * 1000 - (Date.now() % (60 * 1000)); setTimeout(resolve, remainingMs + 100); // Add 100ms buffer }); } this.requestsThisMinute++; return Promise.resolve(); } **Markdown Formatting** We implemented formatters to display financial data beautifully: /** * Format company overview into markdown */ export function formatStockOverview(overviewData: any, quoteData: any): string { // Extract data const overview = overviewData; const quote = quoteData["Global Quote"]; // Calculate price change const currentPrice = parseFloat(quote["05. price"] || "0"); const priceChange = parseFloat(quote["09. change"] || "0"); const changePercent = parseFloat(quote["10. 
change percent"]?.replace("%", "") || "0"); // Format markdown let markdown = `# ${overview.Symbol} (${overview.Name}) - ${formatCurrency(currentPrice)} ${addTrendIndicator(priceChange)}${changePercent > 0 ? '+' : ''}${changePercent.toFixed(2)}%\n\n`; // Add more details... return markdown; } **Tool Implementation** We defined five tools with clear interfaces: server.setRequestHandler(ListToolsRequestSchema, async () => { console.error("[Setup] Listing available tools"); return { tools: [ { name: "get_stock_overview", description: "Get basic company info and current quote for a stock symbol", inputSchema: { type: "object", properties: { symbol: { type: "string", description: "Stock symbol (e.g., 'AAPL')" }, market: { type: "string", description: "Optional market (e.g., 'US')", default: "US" } }, required: ["symbol"] } }, // Additional tools defined here... ] }; }); Each tool's handler included: * Input validation * API client calls with error handling * Markdown formatting of responses * Comprehensive logging This critical phase involved systematically testing each tool: 1. First, we configured the MCP server in the settings: { "mcpServers": { "alphaadvantage-mcp": { "command": "node", "args": [ "/path/to/alphaadvantage-mcp/build/index.js" ], "env": { "ALPHAVANTAGE_API_KEY": "YOUR_API_KEY" }, "disabled": false, "autoApprove": [] } } } 1. Then we tested each tool individually: * **get\_stock\_overview**: Retrieved AAPL stock overview information # AAPL (Apple Inc) - $241.84 ↑+1.91% **Sector:** TECHNOLOGY **Industry:** ELECTRONIC COMPUTERS **Market Cap:** 3.63T **P/E Ratio:** 38.26 ... * **get\_technical\_analysis**: Obtained price action and RSI data # Technical Analysis: AAPL ## Daily Price Action Current Price: $241.84 (↑$4.54, +1.91%) ### Recent Daily Prices | Date | Open | High | Low | Close | Volume | |------|------|------|-----|-------|--------| | 2025-02-28 | $236.95 | $242.09 | $230.20 | $241.84 | 56.83M | ... 
* **get\_earnings\_report**: Retrieved MSFT earnings history and formatted report # Earnings Report: MSFT (Microsoft Corporation) **Sector:** TECHNOLOGY **Industry:** SERVICES-PREPACKAGED SOFTWARE **Current EPS:** $12.43 ## Recent Quarterly Earnings | Quarter | Date | EPS Estimate | EPS Actual | Surprise % | |---------|------|-------------|------------|------------| | 2024-12-31 | 2025-01-29 | $3.11 | $3.23 | ↑4.01% | ... During development, we encountered several challenges: 1. **API Rate Limiting**: * **Challenge**: Free tier limited to 5 calls per minute * **Solution**: Implemented queuing, enforced rate limits, and added comprehensive caching 2. **Data Formatting**: * **Challenge**: Raw API data not user-friendly * **Solution**: Created formatting utilities for consistent display of financial data 3. **Timeout Issues**: * **Challenge**: Complex tools making multiple API calls could timeout * **Solution**: Suggested breaking complex tools into smaller pieces, optimizing caching Our AlphaAdvantage implementation taught us several key lessons: 1. **Plan for API Limits**: Understand and design around API rate limits from the beginning 2. **Cache Strategically**: Identify high-value caching opportunities to improve performance 3. **Format for Readability**: Invest in good data formatting for improved user experience 4. **Test Every Path**: Test all tools individually before completion 5. 
**Handle API Complexity**: For APIs requiring multiple calls, design tools with simpler scopes Effective logging is essential for debugging MCP servers: // Start-up logging console.error('[Setup] Initializing AlphaAdvantage MCP server...'); // API request logging console.error(`[API] Getting stock overview for ${symbol}`); // Error handling with context console.error(`[Error] Tool execution failed: ${error.message}`); // Cache operations console.error(`[Cache] Using cached data for: ${cacheKey}`); Type definitions prevent errors and improve maintainability: export interface AlphaAdvantageConfig { apiKey: string; cacheTTL?: Partial<typeof DEFAULT_CACHE_TTL>; baseURL?: string; } /** * Validate that a stock symbol is provided and looks valid */ function validateSymbol(symbol: unknown): asserts symbol is string { if (typeof symbol !== "string" || symbol.trim() === "") { throw new McpError(ErrorCode.InvalidParams, "A valid stock symbol is required"); } // Basic symbol validation (letters, numbers, dots) const symbolRegex = /^[A-Za-z0-9.]+$/; if (!symbolRegex.test(symbol)) { throw new McpError(ErrorCode.InvalidParams, `Invalid stock symbol: ${symbol}`); } } Reduce API calls and improve performance: // Default cache TTL in seconds const DEFAULT_CACHE_TTL = { STOCK_OVERVIEW: 60 * 60, // 1 hour TECHNICAL_ANALYSIS: 60 * 30, // 30 minutes FUNDAMENTAL_ANALYSIS: 60 * 60 * 24, // 24 hours EARNINGS_REPORT: 60 * 60 * 24, // 24 hours NEWS: 60 * 15, // 15 minutes }; // Check cache first const cachedData = this.cache.get<T>(cacheKey); if (cachedData) { console.error(`[Cache] Using cached data for: ${cacheKey}`); return cachedData; } // Cache successful responses this.cache.set(cacheKey, response.data, cacheTTL); Implement robust error handling that maintains a good user experience: try { switch (request.params.name) { case "get_stock_overview": { // Implementation... } // Other cases... 
default: throw new McpError(ErrorCode.MethodNotFound, `Unknown tool: ${request.params.name}`); } } catch (error) { console.error(`[Error] Tool execution failed: ${error instanceof Error ? error.message : String(error)}`); if (error instanceof McpError) { throw error; } return { content: [{ type: "text", text: `Error: ${error instanceof Error ? error.message : String(error)}` }], isError: true }; } Resources let your MCP servers expose data to Cline without executing code. They're perfect for providing context like files, API responses, or database records that Cline can reference during conversations. 1. **Define the resources** your server will expose: server.setRequestHandler(ListResourcesRequestSchema, async () => { return { resources: [ { uri: "file:///project/readme.md", name: "Project README", mimeType: "text/markdown" } ] }; }); 1. **Implement read handlers** to deliver the content: server.setRequestHandler(ReadResourceRequestSchema, async (request) => { if (request.params.uri === "file:///project/readme.md") { const content = await fs.promises.readFile("/path/to/readme.md", "utf-8"); return { contents: [{ uri: request.params.uri, mimeType: "text/markdown", text: content }] }; } throw new Error("Resource not found"); }); Resources make your MCP servers more context-aware, allowing Cline to access specific information without requiring you to copy/paste. For more information, refer to the official documentation. **Challenge**: APIs often have different authentication methods. 
**Solution**: * For API keys, use environment variables in the MCP configuration * For OAuth, create a separate script to obtain refresh tokens * Store sensitive tokens securely // Authenticate using API key from environment const API_KEY = process.env.ALPHAVANTAGE_API_KEY; if (!API_KEY) { console.error("[Error] Missing ALPHAVANTAGE_API_KEY environment variable"); process.exit(1); } // Initialize API client const apiClient = new AlphaAdvantageClient({ apiKey: API_KEY }); **Challenge**: APIs may not provide all the functionality you need. **Solution**: * Implement fallbacks using available endpoints * Create simulated functionality where necessary * Transform API data to match your needs **Challenge**: Most APIs have rate limits that can cause failures. **Solution**: * Implement proper rate limiting * Add intelligent caching * Provide graceful degradation * Add transparent errors about rate limits if (this.requestsThisMinute >= 5) { console.error("[Rate Limit] Rate limit reached. Waiting for next minute..."); return new Promise<void>((resolve) => { const remainingMs = 60 * 1000 - (Date.now() % (60 * 1000)); setTimeout(resolve, remainingMs + 100); // Add 100ms buffer }); } Last updated 8 days ago   --- ## Page: https://docs.cline.bot/custom-model-configs/aws-bedrock * **AWS Bedrock:** A fully managed service that offers access to leading generative AI models (e.g., Anthropic Claude, Amazon Titan) through AWS. . * **Cline:** A VS Code extension that acts as a coding assistant by integrating with AI models—empowering developers to generate code, debug, and analyze data. * **Enterprise Focus:** This guide is tailored for organizations with established AWS environments (using IAM roles, AWS SSO, AWS Organizations, etc.) to ensure secure and compliant usage. * * * 1. **Sign in to the AWS Management Console:** 2. **Access IAM:** * Search for **IAM (Identity and Access Management)** in the AWS Console. 
* Either create a new IAM user or use your enterprise’s AWS SSO to assume a dedicated role for Bedrock access. 1. **Attach the Managed Policy:** * Attach the `**AmazonBedrockFullAccess**` managed policy to your user/role. 2. **Confirm Additional Permissions:** * Ensure your policy includes permissions for model invocation (e.g., `bedrock:InvokeModel` and `bedrock:InvokeModelWithResponseStream`), model listing, and AWS Marketplace actions (like `aws-marketplace:Subscribe`). * _Enterprise Tip:_ Apply least-privilege practices by scoping resource ARNs and using to restrict access where necessary. * * * 2. **Verify Model Access:** * In the AWS Bedrock console, confirm that the models your team requires (e.g., Anthropic Claude, Amazon Titan) are marked as “Access granted.” 1. **Subscribe to Third-Party Models:** * Navigate to the AWS Bedrock console and locate the model subscription section. * For models from third-party providers (e.g., Anthropic), accept the terms to subscribe. 2. **Enterprise Tip:** * Model subscriptions are often managed centrally. Confirm with your cloud team if a standard subscription process is in place. * * * 2. **Install the Cline Extension:** * Open VS Code. * Go to the Extensions Marketplace (`Ctrl+Shift+X` or `Cmd+Shift+X`). * Search for **Cline** and install it. 1. **Open Cline Settings:** * Click on the settings ⚙️ to select your API Provider. 2. **Select AWS Bedrock as the API Provider:** * From the API Provider dropdown, choose **AWS Bedrock**. 3. **Enter Your AWS Credentials:** * Input your **Access Key** and **Secret Key** (or use temporary credentials if using AWS SSO). * Specify the correct **AWS Region** (e.g., `us-east-1` or your enterprise-approved region). 4. **Select a Model:** * Choose an on-demand model (e.g., **anthropic.claude-3-5-sonnet-20241022-v2:0**). 5. **Save and Test:** * Click **Done/Save** to apply your settings. 
* Test the integration by sending a simple prompt (e.g., “Generate a Python function to check if a number is prime.”). * * * 1. **Secure Access:** * Prefer AWS SSO/federated roles over long-lived IAM credentials. 2. **Enhance Network Security:** 3. **Monitor and Log Activity:** * Enable AWS CloudTrail to log Bedrock API calls. * Use CloudWatch to monitor metrics like invocation count, latency, and token usage. * Set up alerts for abnormal activity. 4. **Handle Errors and Manage Costs:** * Implement exponential backoff for throttling errors. 5. **Regular Audits and Compliance:** * Periodically review IAM roles and CloudTrail logs. * Follow internal data privacy and governance policies. * * * By following these steps, your enterprise team can securely integrate AWS Bedrock with the Cline VS Code extension to accelerate development: 1. **Prepare Your AWS Environment:** Create or use a secure IAM role/user, attach the `AmazonBedrockFullAccess` policy, and ensure necessary permissions. 2. **Verify Region and Model Access:** Confirm that your selected region supports your required models and subscribe via AWS Marketplace if needed. 3. **Configure Cline in VS Code:** Install and set up Cline with your AWS credentials and choose an appropriate model. 4. **Implement Security and Monitoring:** Use best practices for IAM, network security, monitoring, and cost management. * * * _This guide will be updated as AWS Bedrock and Cline evolve. Always refer to the latest documentation and internal policies for up-to-date practices._ **Select a Region:** AWS Bedrock is available in multiple regions (e.g., US East, Europe, Asia Pacific). Choose the region that meets your latency and compliance needs. **Note:** Some advanced models might require an if not available on-demand. **Install VS Code:** Download from the . Consider setting up to securely connect to Bedrock. Use AWS Cost Explorer and set billing alerts to track usage. 
For further details, consult the and coordinate with your internal cloud team. Happy coding! --- ## Page: https://docs.cline.bot/custom-model-configs/aws-bedrock-w-profile-authentication * * * 1. Install the of AWS CLI * Follow the AWS docs to install your OS-specific version of AWS CLI 2. with the AWS CLI * If you do not already have AWS access through the IAM Identity Center, follow the to set up IAM users and roles. Ensure you have a `PowerUserAccess` role. * If you have access to AWS through your employer, open your AWS access portal and find the appropriate account. Ensure you have `PowerUserAccess`permissions. * Open the `Access keys` link and note the `SSO start URL` and `SSO region`, which are needed in the next step 3. Continue configuring your profile using * Once configured, use the following command to authenticate the AWS CLI: \``aws sso login --profile <AWS-profile-name>` * Note which profile name you attach to your AWS account, this is needed to configure Cline in the following steps 4. If you haven't already done so, install VSCode and the Cline extension. Consult the Getting Started page for guidance. 5. Open the Cline extension, then click on the settings button ⚙️ to select your API Provider. * From the API Provider dropdown, select AWS Bedrock * Select the AWS Profile radio button, then enter the AWS Profile Name from step 3 * Select your AWS Region from the dropdown menu * Selecting the cross-region inference checkbox is required for some models  --- ## Page: https://docs.cline.bot/custom-model-configs/gcp-vertex-ai **Overview** **GCP Vertex AI:** A fully managed service that provides access to leading generative AI models—such as Anthropic’s Claude 3.5 Sonnet v2—through Google Cloud. . This guide is tailored for organizations with established GCP environments (leveraging IAM roles, service accounts, and best practices in resource management) to ensure secure and compliant usage. * * * * **Sign in to the GCP Console:** . 
* **Select or Create a Project:** Use an existing project or create a new one dedicated to Vertex AI. _(Screenshot suggestion: Project selection/creation screen in the GCP Console)_ * **Assign Required Roles:** * Grant your user (or service account) the **Vertex AI User** role (`roles/aiplatform.user`). * For service accounts, also attach the **Vertex AI Service Agent** role (`roles/aiplatform.serviceAgent`) to enable certain operations. * Consider additional predefined roles as needed: * Vertex AI Platform Express Admin * Vertex AI Platform Express User * Vertex AI Migration Service User _(Screenshot suggestion: IAM console showing role assignments)_ * **Cross-Project Resource Access:** * For BigQuery tables in different projects, assign the **BigQuery Data Viewer** role. * For Cloud Storage buckets in different projects, assign the **Storage Object Viewer** role. * For external data sources, refer to the . * * * Vertex AI supports eight regions. Select a region that meets your latency, compliance, and capacity needs. Examples include: * **us-east5 (Columbus, Ohio)** * **us-east1 (South Carolina)** * **us-east4 (Northern Virginia)** * **us-central1 (Iowa)** * **us-west1 (The Dalles, Oregon)** * **us-west4 (Las Vegas, Nevada)** * **europe-west1 (Belgium)** * **asia-southeast1 (Singapore)** _(Screenshot suggestion: List or map of supported regions in the Vertex AI dashboard)_ * **Open Vertex AI Model Garden:** In the Cloud Console, navigate to **Vertex AI → Model Garden**. * **Enable Claude 3.5 Sonnet v2:** Locate the model card for Claude 3.5 Sonnet v2 and click **Enable**. _(Screenshot suggestion: Model Garden showing the Claude 3.5 Sonnet v2 model card with the Enable button)_ * * * * **Install the Cline Extension:** * Open VS Code. * Navigate to the Extensions Marketplace (Ctrl+Shift+X or Cmd+Shift+X). * Search for **Cline** and install the extension. * **Open Cline Settings:** Click the settings ⚙️ icon within the Cline extension. 
* **Set API Provider:** Choose **GCP Vertex AI** from the API Provider dropdown. * **Enter Your Google Cloud Project ID:** Provide the project ID you set up earlier. * **Select the Region:** Choose one of the supported regions (e.g., `us-east5`). * **Select the Model:** From the available list, choose **Claude 3.5 Sonnet v2**. * **Save and Test:** Save your settings and test by sending a simple prompt (e.g., “Generate a Python function to check if a number is prime.”). _(Screenshot suggestion: Cline settings showing project ID, region, and model selection)_ * * * 2. **Initialize and Authenticate:** gcloud init gcloud auth application-default login * This sets up Application Default Credentials (ADC) using your Google account. _(Screenshot suggestion: Terminal output for successful_ `_gcloud auth application-default login_`_)_ 1. **Restart VS Code:** Ensure VS Code is restarted so that the Cline extension picks up the new credentials. 1. **Create a Service Account:** * In the GCP Console, navigate to **IAM & Admin > Service Accounts**. * Create a new service account (e.g., “vertex-ai-client”). 2. **Assign Roles:** * Attach **Vertex AI User** (`roles/aiplatform.user`). * Attach **Vertex AI Service Agent** (`roles/aiplatform.serviceAgent`). * Optionally, add other roles as required. _(Screenshot suggestion: Creating a service account with role assignments)_ 3. **Generate a JSON Key:** * In the Service Accounts section, manage keys for your service account and download the JSON key. 4. **Set the Environment Variable:** export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-key.json" * This instructs Google Cloud client libraries (and Cline) to use this key. _(Screenshot suggestion: Terminal showing the export command)_ 5. **Restart VS Code:** Launch VS Code from a terminal where the `GOOGLE_APPLICATION_CREDENTIALS` variable is set. * * * * **Principle of Least Privilege:** Only grant the minimum necessary permissions. 
Custom roles can offer finer control compared to broad predefined roles. * **Project vs. Resource-Level Access:** Access can be managed at both levels. Note that resource-level permissions (e.g., for BigQuery or Cloud Storage) add to, but do not override, project-level policies. * **Model Observability Dashboard:** * In the Vertex AI Console, navigate to the **Model Observability** dashboard. * Monitor metrics such as request throughput, latency, and error rates (including 429 quota errors). _(Screenshot suggestion: Model Observability dashboard with error metrics highlighted)_ * **Quota Management:** * If you encounter 429 errors, check the **IAM & Admin > Quotas** page. * **Service Agents:** Be aware of the different service agents: * Vertex AI Service Agent * Vertex AI RAG Data Service Agent * Vertex AI Custom Code Service Agent * Vertex AI Extension Service Agent * **Cross-Project Access:** For resources in other projects (e.g., BigQuery, Cloud Storage), ensure that the appropriate roles (BigQuery Data Viewer, Storage Object Viewer) are assigned. * * * By following these steps, your enterprise team can securely integrate GCP Vertex AI with the Cline VS Code extension to harness the power of **Claude 3.5 Sonnet v2**: * **Prepare Your GCP Environment:** Create or use a project, configure IAM with least privilege, and ensure necessary roles (including the Vertex AI Service Agent role) are attached. * **Verify Regional and Model Access:** Confirm that your chosen region supports Claude 3.5 Sonnet v2 and that the model is enabled. * **Configure Cline in VS Code:** Install Cline, enter your project ID, select the appropriate region, and choose the model. * **Set Up Authentication:** Use either user credentials (via `gcloud auth application-default login`) or a service account with a JSON key. * **Implement Security and Monitoring:** Adhere to best practices for IAM, manage resource access carefully, and monitor usage with the Model Observability dashboard. 
_This guide will be updated as GCP Vertex AI and Cline evolve. Always refer to the latest documentation for current practices._

For further details, please consult the official GCP documentation and your internal security policies. Happy coding!

---

## Page: https://docs.cline.bot/custom-model-configs/litellm-and-cline-using-codestral

This guide demonstrates how to run LiteLLM locally, starting with the Codestral model, for use with Cline.

**Prerequisites:**

* Docker installed to run the LiteLLM image locally
* For this example config: a Codestral API key (different from the Mistral API keys)

1. **Create a `.env` file and fill in the appropriate fields:**

    ```bash
    # Tip: Use the following command to generate a random alphanumeric key:
    # openssl rand -base64 32 | tr -dc 'A-Za-z0-9' | head -c 32
    LITELLM_MASTER_KEY=YOUR_LITELLM_MASTER_KEY
    CODESTRAL_API_KEY=YOUR_CODESTRAL_API_KEY
    ```

    _Note: Although the server is limited to localhost, it's good practice to set `LITELLM_MASTER_KEY` to something secure._

2. **Create the configuration:**

    We'll need a `config.yaml` file to contain our LiteLLM configuration. In this case we'll have just one model, `codestral/codestral-latest`, and label it `codestral`:

    ```yaml
    model_list:
      - model_name: codestral
        litellm_params:
          model: codestral/codestral-latest
          api_key: os.environ/CODESTRAL_API_KEY
    ```

3. **Start the LiteLLM Docker container:**

    ```bash
    docker run \
        --env-file .env \
        -v $(pwd)/config.yaml:/app/config.yaml \
        -p 127.0.0.1:4000:4000 \
        ghcr.io/berriai/litellm:main-latest \
        --config /app/config.yaml --detailed_debug
    ```
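Before pointing Cline at the proxy, you can sanity-check it by hand. LiteLLM exposes an OpenAI-compatible API, so the sketch below builds the chat-completion request Cline will be making (the actual POST is left commented out so the snippet runs without the container; the prompt text and helper name are illustrative):

```python
import json

# The proxy's OpenAI-compatible endpoint, as mapped by the docker run above.
BASE_URL = "http://0.0.0.0:4000/v1"
MASTER_KEY = "YOUR_LITELLM_MASTER_KEY"  # the value you put in .env

def build_chat_request(model: str, prompt: str) -> tuple[str, dict, dict]:
    """Return (url, headers, payload) for an OpenAI-style chat completion."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {MASTER_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,  # the model_name from config.yaml
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, payload

if __name__ == "__main__":
    url, headers, payload = build_chat_request("codestral", "Say hello")
    print(url)
    print(json.dumps(payload, indent=2))
    # To actually send it (requires the container to be running):
    # import urllib.request
    # req = urllib.request.Request(url, json.dumps(payload).encode(),
    #                              headers, method="POST")
    # print(urllib.request.urlopen(req).read().decode())
```

A successful response confirms the master key and model label are wired up correctly before you touch Cline's settings.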
**Set up Cline:** Once the LiteLLM server is up and running, you can configure it in Cline:

* Base URL should be `http://0.0.0.0:4000/v1`
* API Key should be the one you set in `.env` for `LITELLM_MASTER_KEY`
* Model ID is `codestral`, or whatever you named it under `config.yaml`

Author: mdp

---

## Page: https://docs.cline.bot/running-models-locally/read-me-first

Cline is a powerful AI coding assistant that uses tool-calling to help you write, analyze, and modify code. While running models locally can save on API costs, there's an important trade-off: local models are significantly less reliable at using these essential tools.

When you run a "local version" of a model, you're actually running a drastically simplified copy of the original. This process, called distillation, is like trying to compress a professional chef's knowledge into a basic cookbook: you keep the simple recipes but lose the complex techniques and intuition.

Local models are created by training a smaller model to imitate a larger one, but they typically retain only 1-26% of the original model's capacity. This massive reduction means:

* Less ability to understand complex contexts
* Reduced capability for multi-step reasoning
* Limited tool-use abilities
* Simplified decision-making processes

Think of it like running your development environment on a calculator instead of a computer: it might handle basic tasks, but complex operations become unreliable or impossible.
When you run a local model with Cline:

* Responses are 5-10x slower than cloud services
* System resources (CPU, GPU, RAM) get heavily utilized
* Your computer may become less responsive for other tasks
* Code analysis becomes less accurate
* File operations may be unreliable
* Browser automation capabilities are reduced
* Terminal commands might fail more often
* Complex multi-step tasks often break down

You'll need at minimum:

* Modern GPU with 8GB+ VRAM (RTX 3070 or better)
* 32GB+ system RAM
* Fast SSD storage
* Good cooling solution

Even with this hardware, you'll be running smaller, less capable versions of models:

| Model size | What to expect |
| --- | --- |
| 7B models | Basic coding, limited tool use |
| 14B models | Better coding, unstable tool use |
| 32B models | Good coding, inconsistent tool use |
| 70B models | Best local performance, but requires expensive hardware |

Put simply, the cloud (API) versions of these models are the full versions. The full version of DeepSeek-R1, for example, is 671B parameters; the distilled models are essentially "watered-down" copies of the cloud model.

1. Use cloud models for:
   * Complex development tasks
   * When tool reliability is crucial
   * Multi-step operations
   * Critical code changes
2. Use local models for:
   * Simple code completion
   * Basic documentation
   * When privacy is paramount
   * Learning and experimentation

**Tips:**

* Start with smaller models
* Keep tasks simple and focused
* Save work frequently
* Be prepared to switch to cloud models for complex operations
* Monitor system resources

**Troubleshooting:**

* **"Tool execution failed":** Local models often struggle with complex tool chains. Simplify your prompt.
* **"No connection could be made because the target machine actively refused it":** This usually means that the Ollama or LM Studio server isn't running, or is running on a different port/address than Cline is configured to use. Double-check the Base URL address in your API Provider settings.
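If you hit the connection-refused error above, a quick way to tell whether anything is actually listening on the expected port is a small socket check. This is a sketch, not part of Cline; it assumes the default ports mentioned in these docs (11434 for Ollama, 1234 for LM Studio):

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Default local-server ports: Ollama 11434, LM Studio 1234.
    for name, port in [("Ollama", 11434), ("LM Studio", 1234)]:
        status = "listening" if is_listening("127.0.0.1", port) else "nothing listening"
        print(f"{name} (port {port}): {status}")
```

If the port shows "nothing listening", start the server (or check which port it's bound to) before adjusting Cline's Base URL.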
* **Slow or incomplete responses:** Local models can be slower than cloud-based models, especially on less powerful hardware. If performance is an issue, try a smaller model, and expect significantly longer processing times.
* **System stability:** Watch for high GPU/CPU usage and temperatures.
* **Context limitations:** Local models often have smaller context windows than cloud models. Break tasks down into smaller pieces.

Local model capabilities are improving, but they're not yet a complete replacement for cloud services, especially for Cline's tool-based functionality. Consider your specific needs and hardware capabilities carefully before committing to a local-only approach.

* Check the latest compatibility guides
* Share your experiences with other developers

Remember: when in doubt, prioritize reliability over cost savings for important development work.

---

## Page: https://docs.cline.bot/running-models-locally/ollama

## Ollama

A quick guide to setting up Ollama for local AI model execution with Cline.

Last updated 4 days ago

**Prerequisites:**

* Windows, macOS, or Linux computer
* Cline installed in VS Code

**Install Ollama:**

* Visit the Ollama website
* Download and install for your operating system

**Download a model:**

* Select a model and copy its command: `ollama run [model-name]`
* Open your terminal and run the command, for example:

    ```bash
    ollama run llama2
    ```

**✨ Your model is now ready to use within Cline!**

**Configure Cline:**

1. Open VS Code
2. Click the Cline settings icon
3. Select "Ollama" as API provider
4. Enter configuration:
   * Base URL: `http://localhost:11434/` (default value, can be left as is)
   * Select the model from your available options

**Tips:**

* Start Ollama before using it with Cline
* Keep Ollama running in the background
* The first model download may take several minutes

**Troubleshooting:** If Cline can't connect to Ollama:

1. Verify Ollama is running
2. Check the base URL is correct
3. Ensure the model is downloaded

Browse available models in the Ollama model library. Need more info? Read the Ollama documentation.

---

## Page: https://docs.cline.bot/running-models-locally/lm-studio
## LM Studio

A quick guide to setting up LM Studio for local AI model execution with Cline.

Last updated 2 months ago

Run AI models locally using LM Studio with Cline.

**Prerequisites:**

* Windows, macOS, or Linux computer with AVX2 support
* Cline installed in VS Code

**Install LM Studio:**

* Visit the LM Studio website
* Download and install for your operating system

**Get to know the interface:**

* Open the installed application
* You'll see four tabs on the left: **Chat**, **Developer** (where you will start the server), **My Models** (where your downloaded models are stored), **Discover** (add new models)

**Download a model:**

* Browse the "Discover" page
* Select and download your preferred model
* Wait for the download to complete

**Start the server:**

* Navigate to the "Developer" tab
* Toggle the server switch to "Running"
* Note: the server will run at `http://localhost:1234`

**Configure Cline:**

1. Open VS Code
2. Click the Cline settings icon
3. Select "LM Studio" as API provider
4. Select your model from the available options

**Tips:**

* Start LM Studio before using it with Cline
* Keep LM Studio running in the background
* The first model download may take several minutes depending on size
* Models are stored locally after download

**Troubleshooting:** If Cline can't connect to LM Studio:

1. Verify the LM Studio server is running (check the Developer tab)
2. Ensure a model is loaded
3. Check that your system meets the hardware requirements

---

## Page: https://docs.cline.bot/more-info/telemetry

To help make Cline better for everyone, we collect anonymous usage data that helps us understand how developers are using our open-source AI coding agent. This feedback loop is crucial for improving Cline's capabilities and user experience.

We use PostHog, an open-source analytics platform, for data collection and analysis. Our telemetry implementation is fully transparent - you can review the source code to see exactly what we track.

Privacy is our priority. All collected data is anonymized before being sent to PostHog, with no personally identifiable information (PII) included.
Your code, prompts, and conversation content always remain private and are never collected.

We collect basic anonymous usage data, including:

* **Task Interactions:** When tasks start and finish, conversation flow (without content)
* **Mode and Tool Usage:** Switches between plan/act modes, which tools are being used
* **Token Usage:** Basic metrics about conversation length to estimate cost (not the actual content of the tokens)
* **System Context:** OS type and VS Code environment details
* **UI Activity:** Navigation patterns and feature usage

For complete transparency, you can inspect our source code to see the exact events we track.

Telemetry in Cline is entirely optional and requires your explicit consent:

* When you install or update the VS Code extension, you'll see a simple prompt, "Help Improve Cline", with Allow or Deny options
* You can change your preference anytime in settings

Cline also respects VS Code's global telemetry settings. If you've disabled telemetry at the VS Code level, Cline's telemetry will automatically be disabled as well.