The Ultimate Guide to Build Your AI Copilot

Establish potent use cases and create your AI Copilot with our step-by-step guide.

Alexandre Airvault

Is Your API AI-ready? Our Guidelines and Best Practices

You now have a list of use cases and a clearer idea of what you’re aiming for, so it’s time to start looking at the resources needed to deliver them.

If you’re about to start building an AI Copilot, you will need to connect the LLM to your API so that it can access live data and execute tasks.

However, not all APIs can integrate AI straight away. Your API may require adjusting for it to interact successfully with the LLM.

For starters, you will need an OpenAPI specification file. If your API is not a REST API, but a GraphQL, for example, limited solutions do exist for connecting it to an LLM (like this LangChain wrapper).

Since the OpenAPI specification is the only API format natively supported by OpenAI models, it is the only one covered here.

The main challenge when using your API to build a copilot is to ensure crystal-clear communication on when and how to use the endpoints. Delivering efficient and comprehensive API documentation for humans is hard enough, so you can imagine what it’s like for a machine whose only frame of reference is the OpenAPI file!

The challenges facing humans and machines are, in fact, not dissimilar. The aspects of an API you struggle with will also trip up the machine, such as:

  • Clear descriptions of the endpoints and parameters, to understand how they work,
  • Thorough error descriptions, to understand why a call failed and how to make the next one work.

At Blobr, we have reviewed dozens of API specs, tried them with the best LLMs, and determined a set of necessary criteria to make any API run smoothly with LLMs. Some improve with only a few alterations to the OpenAPI file whereas others require reworking your API.

Without further ado, here’s what to look for.

Keys to an AI-ready API

We can break down all the identified criteria into the following two categories:

  • How the information is displayed, or how the LLM will understand what the API can deliver.
  • How the data is treated and processed, or the way the LLM will use the data provided by the API. This might require some additional API design.

The first step is to define the scope of your future AI feature. Depending on the kind of data you have on hand, you should be able to determine the level of work required.

How the information is displayed

This is the easiest part, which can quickly be improved on the OpenAPI file. Here’s what to look for in your OpenAPI specification file.

Even if you connect the API documentation to an LLM through a vectorized database, it is still important to populate the spec file extensively in order to limit the risk of hallucinations. This is because the LLM can fail to reconcile info between the doc and the API correctly.

To illustrate best practices, we took samples from the PetStore API, which is a pretty good example of an API where the information is displayed well.

Endpoint descriptions

Each endpoint should have a detailed and user-friendly description.

This description mustn’t be the same as the summary of the endpoint but should provide answers to questions, such as: is it better to use another endpoint before calling this one? Is there a rule that applies to the upcoming parameters?

You can also add enough details about the kind of data it provides to give the LLM some clue about the use cases for which the endpoint should be triggered.

For example, in the PetStore API, this description specifies that queries with several statuses should use commas to separate them. Without this information, the LLM would probably not have used anything or guessed at a separator.
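As an illustrative sketch (not the PetStore's exact file), such a description could look like this in the spec:

```yaml
/pet/findByStatus:
  get:
    summary: Finds pets by status
    description: >-
      Returns all pets matching the given status. Multiple status values
      can be provided, separated by commas (for example
      "available,pending"). Use this endpoint when the user asks which
      pets can currently be bought or reserved; use /pet/{petId} instead
      when a specific pet ID is already known.
```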

Operation IDs

Documenting Operation IDs is another way to help the LLM identify which endpoint to leverage.

Though they are usually optional, at Blobr we noticed that when each endpoint was given a distinct operation ID, this was hugely helpful to the LLM in calling the right endpoint and delivering better results.

Don’t scratch your head too much over this one: in most cases, the endpoint’s concatenated name will be enough.
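For instance, distinct operation IDs might be set like this (the names are illustrative):

```yaml
paths:
  /pet/findByStatus:
    get:
      operationId: findPetsByStatus
  /store/order:
    post:
      operationId: placeOrder
```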

Parameter descriptions and schemas

Each parameter should have a clear description, which includes a comprehensive explanation of its purposes. The description must cover how to use it, the type of data input expected, and the format of this input.

Here’s an example from the Klarna API used for its ChatGPT plugin:

- name: q
  in: query
  description: >-
    A precise query that matches one very small category or product that
    needs to be searched for to find the products the user is looking
    for. If the user explicitly stated what they want, use that as a
    query. The query is as specific as possible to the product name or
    category mentioned by the user in its singular form, and don't
    contain any clarifiers like latest, newest, cheapest, budget,
    premium, expensive or similar. The query is always taken from the
    latest topic, if there is a new topic a new query is started. If the
    user speaks another language than English, translate their request
    into English (example: translate fia med knuff to ludo board game)!
  required: true

This parameter processes the user’s request and turns it into a search in the Klarna database in order to return the most accurate results.

As you can see, the description reads like a well-engineered prompt, built to identify and extract the product’s category.

The schema is another important point, and the OpenAPI convention offers a broad set of tools to define the request: for example, the Twilio API includes URLs containing further documentation, a pattern, and the minimum and maximum length when the expected input is an ID.

Example is another useful functionality available in OpenAPI 3.0: you can add a mock request to your parameter indicating what type of value is expected.

Lastly, don’t forget to indicate which parameters are required.
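Putting these pieces together, a hypothetical ID parameter might combine a description, schema constraints, an example, and the required flag (the "petId" name, pattern, and lengths are invented for illustration):

```yaml
- name: petId
  in: path
  description: >-
    Unique identifier of the pet, as returned by the search or listing
    endpoints. Always an uppercase "PET" prefix followed by six digits.
  required: true
  schema:
    type: string
    pattern: '^PET[0-9]{6}$'
    minLength: 9
    maxLength: 9
    example: PET000123
```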

API call errors

The need for description doesn’t stop at the request; possible responses should also be thoroughly documented, including call errors.

In many APIs, errors form a sort of blind spot: they are omitted from the specification file, and only vague information about them is provided in the API documentation. And yet, they are crucial tools for the LLM.

The first step is to create several errors covering the possible misuses of an endpoint. This helps the LLM understand what went wrong, and adjust the following call.

The second step is to describe the errors. Error descriptions are crucial: without them, the LLM cannot help the user understand where the call went wrong. With them, the LLM's output guides the user, specifying why the call failed, not just the fact that it failed.

In our example, instead of a single error code for a bad string, we created two codes to distinguish the type of error: a string not found (imagine a request for “in stock” instead of “available”) and an invalid request (such as using semicolons instead of commas to separate two statuses).
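A sketch of how such a two-error split could be documented on the responses side (the codes and wording are illustrative):

```yaml
responses:
  '200':
    description: Pets matching the requested status.
  '400':
    description: >-
      Invalid separator. Status values must be separated by commas, not
      semicolons (for example "available,pending").
  '422':
    description: >-
      Status not found. Allowed values are "available", "pending" and
      "sold"; a request for "in stock" instead of "available" returns
      this error.
```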

How the data is treated and processed

The next set of actions concerns the design of the API, and how to improve the API to enhance the possible usages when connected to an LLM, as well as limit hallucinations.

This can take several forms, like implementing a pagination policy and limiting the payload, allowing the use of natural language to get records, adding a search endpoint to retrieve IDs, etc.

Always bear in mind that you must rework the API with a view to more granular access to your data, thereby making it easier for the LLM to digest.

Search endpoint

Many APIs are not designed for natural-language interaction: they use IDs to retrieve a specific item, which is incompatible with the way you interact with an LLM.

For example, the HubSpot API has a Search endpoint enabling users to look for contacts, companies, etc. in natural language. In this instance, the LLM will make a call with this endpoint first, retrieve the ID, then make additional calls using the ID to get more information and answer the query.

Now let’s imagine a CRM API that doesn’t have a search endpoint: if you want to retrieve info about your prospect Jane Doe, you will probably have to find the ID of her contact item first. If you simply connect the API to an LLM and ask: “Please find info about Jane Doe”, you will receive an error message.
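A search endpoint that resolves this could be sketched as follows (a hypothetical CRM spec, not HubSpot's actual one):

```yaml
/contacts/search:
  get:
    operationId: searchContacts
    description: >-
      Free-text search over contacts. Call this endpoint first to turn a
      name such as "Jane Doe" into a contact ID, then use that ID with
      /contacts/{contactId} to retrieve the full record.
    parameters:
      - name: query
        in: query
        description: Natural-language search string, e.g. a contact's name.
        required: true
        schema:
          type: string
```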

The alternative to the Search endpoint is to host your data on an SQL database. However, this solution means the data might not always be up to date.

Numerous parameters and Filtering

Granularity is key when it comes to limiting hallucinations.

The heavier the data payload, the greater the likelihood of misinterpretation by the LLM. One of the best ways to prevent this is to offer multiple parameters, which broadens the LLM's options for limiting the size of the payload.

The parameters should also include filtering options, by category or by min-max range, and enable sorting.

If we take our PetStore example a step further, we can imagine additional parameters for our Status endpoint that would include:

  • A category parameter: a string with an enum to enter a type of pet,
  • Minimum and maximum parameters to filter the pets by price,
  • A “sort by” parameter: alphabetical for the pets’ names, numerical for the prices, etc.,
  • A limit per page, and a limit of pages, both with a default value.
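The list above could translate into parameters along these lines (all names, enums, and default values are illustrative):

```yaml
parameters:
  - name: category
    in: query
    description: Type of pet to return.
    schema:
      type: string
      enum: [dog, cat, bird, fish]
  - name: minPrice
    in: query
    description: Only return pets priced at or above this value.
    schema:
      type: number
  - name: maxPrice
    in: query
    description: Only return pets priced at or below this value.
    schema:
      type: number
  - name: sortBy
    in: query
    description: Sort order for the results.
    schema:
      type: string
      enum: [name_asc, name_desc, price_asc, price_desc]
  - name: limit
    in: query
    description: Maximum number of pets per page.
    schema:
      type: integer
      default: 10
```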

Filtering can also be used to limit the size of the payload by adding a parameter enabling the LLM to select the properties to be returned with an array of strings.

For example, when you query users with the Mixpanel API, you can end up with several dozen properties in the responses. This type of filtering helps limit the size of the response, speeds up the call, and lets the LLM handle more data.
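Such a property-selection parameter might be sketched as follows (the name "fields" is hypothetical):

```yaml
- name: fields
  in: query
  description: >-
    Properties to include in each returned record; omit to return the
    full set.
  schema:
    type: array
    items:
      type: string
  example: [name, status, price]
```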

Pagination

Ideally, the size of the payload wouldn’t exceed 10 to 15 entries, depending on the size of those entries.

Pagination is the best way to break up the payload to deal with responses of more than 10 to 15 entries:

  • It allows efficient data requests and prevents overload from accessing all the data at once.
  • It enables the LLM to work on appropriately sized subsets of the data.

You can set default limits to the number of entries or pages in the endpoint parameters, which is much better than letting the end-user or the LLM decide.

To decide the default limit, take a look at the size and complexity of the data. If one data entry is quite long with a lot of metadata, it is best to limit the number of entries per page to about 5.

Don’t forget to unify pagination naming and methods across the API.
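As a sketch, default-limited pagination parameters could look like this (the names and values are illustrative, and should be reused identically across endpoints):

```yaml
- name: limit
  in: query
  description: Number of entries per page.
  schema:
    type: integer
    default: 5
    maximum: 15
- name: page
  in: query
  description: Page number to return, starting at 1.
  schema:
    type: integer
    default: 1
```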

Data model clarity

There are a few ways to optimize the clarity of the data model when designing an API for an LLM.

Consistency in Data Structures:
Ensure that the data structures used across different endpoints are consistent. This means using the same naming conventions, data types, and formats. This consistency enables the seamless passing of data from one endpoint to another.

Reusability with $ref:
Utilize $ref to create reusable definitions. This helps to ensure that the same data model is used across different endpoints, making it easier to pass data around without needing to reformat or reinterpret it.

Centralized Definitions:
Define common data structures in a centralized location (like in a components section in OpenAPI). This centralization ensures that all endpoints refer to the same definition, thereby reducing the risk of discrepancies.
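A minimal sketch of a centralized, reused schema, based loosely on the PetStore's Pet model:

```yaml
components:
  schemas:
    Pet:
      type: object
      properties:
        id:
          type: integer
        name:
          type: string
        status:
          type: string
          enum: [available, pending, sold]

paths:
  /pet/findByStatus:
    get:
      responses:
        '200':
          description: Pets matching the requested status.
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Pet'
```

Every endpoint that returns pets points at the same $ref, so the LLM can pass a Pet object from one call to another without reinterpreting its fields.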

Blobr’s API Checker grid

This grid helped us assign an AI-compatibility index to any API.

You can check your score yourself.

  • Required parameters documented (15 points): Required parameters always need to be properly documented.
  • Endpoint descriptions (15 points): Each endpoint or datapoint should have a detailed and user-friendly description.
  • Numerous and described parameters (10 points): Each endpoint should support a range of parameters, each with a clear description to enable efficient data filtering. These parameters must come with comprehensive explanations of their purpose and the expected input data format. Data types, allowed values, and constraints increase the grade.
  • Data model clarity (10 points): The data model must be clear so it can be used to make several calls from different endpoints (passing information from one endpoint to another). If there is no data model, assign a grade of 0/10.
  • Analytics endpoints (10 points): An analytics endpoint should be accessible to address analytics questions.
  • Heaviness of data payload and answers (10 points): The data payload and API answers should be kept lightweight to prevent any potential issues of overloading or generating inaccurate information.
  • Use cases (10 points): The API's intended use cases should be self-evident, and its documentation should provide clear guidance on how to orchestrate various API calls to accomplish specific actions.
  • Operation IDs (5 points): Each API endpoint operation should include an operation ID.
  • Authentication description (5 points): The API should support secure authentication mechanisms, such as API keys, OAuth tokens, or other methods, to ensure that the LLM can access the data it needs while maintaining security.
  • Pagination (5 points): Pagination of the data payload is essential, and it must be clearly outlined in the specifications. In particular, look for the presence of “Offset”, “Limit”, “Cursor”, “Page”, “start_time”, “end_time”, or “last_key” in the pagination section.
  • API call errors (5 points): The API should provide clear and informative error messages for missing information in requests or bad requests, ensuring developers understand how to make a valid API call. Check for explanations of HTTP status codes.

Total: 100 points
Next chapter: 3/3
Building your AI Copilot with OpenAI, LangChain, etc.
Coming soon
