# Execution Model

Parallel AI uses a sophisticated execution model to manage and run parallelized code. The model is built around three key processes:

1. **Scheduling:** The system schedules the execution of parallel tasks across the available GPUs and CPUs. This scheduling is dynamic and takes into account the current load and availability of resources to avoid bottlenecks and ensure smooth execution.
2. **Runtime Environment:** A specialized runtime environment manages the execution of code, handling task coordination, error handling, and the provision of necessary runtime data. This environment ensures that each task has access to the required resources and that the results of tasks are correctly integrated.
3. **Concurrency Control:** Concurrency-control mechanisms such as locks, semaphores, and other synchronization primitives prevent race conditions and preserve data integrity.
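The interplay of these processes can be sketched in miniature (this is an illustrative sketch, not Parallel AI's actual scheduler): a thread pool dynamically dispatches tasks to whichever worker is free, while a lock guards the shared result state so task outputs are integrated without races.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Shared state that multiple workers update; a lock prevents race conditions.
results = []
results_lock = threading.Lock()

def run_task(task_id: int) -> None:
    # Simulate one unit of parallel work (e.g., one shard of a computation).
    value = task_id * task_id
    # Concurrency control: only one worker mutates shared state at a time.
    with results_lock:
        results.append((task_id, value))

# Dynamic scheduling: the pool hands each submitted task to the next free worker.
with ThreadPoolExecutor(max_workers=4) as pool:
    for task_id in range(8):
        pool.submit(run_task, task_id)

# The `with` block waits for all tasks, so results are complete here.
print(sorted(results))
```

In a real system the "tasks" would be GPU/CPU workloads and the pool would span machines, but the pattern — schedule dynamically, coordinate completion, synchronize shared state — is the same.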

Once scheduled, the code executes on Parallel AI's aggregated network of decentralized GPUs and CPUs.

---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://parallel-ai.gitbook.io/parallel-ai/technology/execution-model.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
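As a sketch of how an agent might construct such a query (the question text here is illustrative, and the actual network request is left to whatever HTTP client is available), the question must be URL-encoded before being passed as the `ask` parameter:

```python
from urllib.parse import urlencode

base = "https://parallel-ai.gitbook.io/parallel-ai/technology/execution-model.md"
question = "How does the scheduler balance load across GPUs?"  # illustrative question

# urlencode handles spaces and punctuation so the question is safe in a URL.
url = f"{base}?{urlencode({'ask': question})}"
print(url)

# An HTTP client would then issue the GET request, e.g.:
#   import urllib.request
#   answer = urllib.request.urlopen(url).read()
```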
