openai

OpenAI specific module.

Hint

Use pip to install the necessary dependencies for this module: pip install mltb2[openai]

class mltb2.openai.OpenAiAzureChat(api_key: str, model: str, api_version: str, azure_endpoint: str)[source]

Bases: OpenAiChat

Tool to interact with Azure OpenAI chat models.

This can also be constructed with from_yaml().

Parameters:
  • api_key (str) – The OpenAI API key.

  • model (str) – The OpenAI model name.

  • api_version (str) – The OpenAI API version. A common value for this is 2023-05-15.

  • azure_endpoint (str) – The Azure endpoint.

class mltb2.openai.OpenAiChat(api_key: str, model: str)[source]

Bases: object

Tool to interact with OpenAI chat models.

This can also be constructed with from_yaml().

See also

OpenAI API reference: Create chat completion

Parameters:
  • api_key (str) – The OpenAI API key.

  • model (str) – The OpenAI model name.

__call__(prompt: str | List[Dict[str, str]], completion_kwargs: Dict[str, Any] | None = None) OpenAiChatResult[source]

Create a model response for the given prompt (chat conversation).

Parameters:
  • prompt (str | List[Dict[str, str]]) – The prompt for the model.

  • completion_kwargs (Dict[str, Any] | None) –

    Keyword arguments for the OpenAI completion.

    • model cannot be set via completion_kwargs! Please set the model in the initializer.

    • messages cannot be set via completion_kwargs! Please set the prompt argument.

Returns:

The result of the OpenAI completion.

Return type:

OpenAiChatResult
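A minimal usage sketch. The API key and model name below are placeholders, and calling the object performs a real API request, so a valid key and network access are assumed:

```python
from mltb2.openai import OpenAiChat

# Placeholder credentials -- substitute a real API key and model name.
chat = OpenAiChat(api_key="sk-...", model="gpt-4")

# The prompt can be a plain string...
result = chat("What is the capital of France?")
print(result.content)

# ...or a chat conversation given as a list of role/content dicts,
# optionally with extra keyword arguments for the completion.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]
result = chat(messages, completion_kwargs={"temperature": 0.0})
print(result.content, result.total_tokens)
```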

classmethod from_yaml(yaml_file)[source]

Construct this class from a YAML file.

Parameters:

yaml_file – The YAML file.

Returns:

The constructed class.
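A sketch of such a YAML file. The key names here are an assumption, chosen to mirror the initializer arguments; check your installed version for the exact expected keys:

```yaml
# chat_config.yaml -- key names assumed to mirror the initializer arguments
api_key: sk-...
model: gpt-4
```

The class would then be constructed with OpenAiChat.from_yaml("chat_config.yaml").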

class mltb2.openai.OpenAiChatResult(content: str | None = None, model: str | None = None, prompt_tokens: int | None = None, completion_tokens: int | None = None, total_tokens: int | None = None, finish_reason: str | None = None, completion_args: Dict[str, Any] | None = None)[source]

Bases: object

Result of an OpenAI chat completion.

If you want to convert this to a dict, use asdict(open_ai_chat_result) from the dataclasses module.

See also

OpenAI API reference: The chat completion object

Parameters:
  • content (str | None) – the result of the OpenAI completion

  • model (str | None) – model name which has been used

  • prompt_tokens (int | None) – number of tokens of the prompt

  • completion_tokens (int | None) – number of tokens of the completion (content)

  • total_tokens (int | None) – number of total tokens (prompt_tokens + completion_tokens)

  • finish_reason (str | None) –

    The reason why the completion stopped.

    • stop: Means the API returned the full completion without running into any token limit.

    • length: Means the API stopped the completion because of running into a token limit.

    • content_filter: When content was omitted due to a flag from the OpenAI content filters.

    • tool_calls: When the model called a tool.

    • function_call (deprecated): When the model called a function.

  • completion_args (Dict[str, Any] | None) –

    The arguments which have been used for the completion. Examples:

    • model: always set

    • max_tokens: only set if completion_kwargs contained max_tokens

    • temperature: only set if completion_kwargs contained temperature

    • top_p: only set if completion_kwargs contained top_p
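Since this is a dataclass, the dict conversion mentioned above is a one-liner. A small sketch with made-up field values:

```python
from dataclasses import asdict

from mltb2.openai import OpenAiChatResult

# Made-up values for illustration only.
open_ai_chat_result = OpenAiChatResult(
    content="Paris",
    model="gpt-4",
    prompt_tokens=14,
    completion_tokens=1,
    total_tokens=15,
    finish_reason="stop",
)

result_dict = asdict(open_ai_chat_result)
print(result_dict["content"])
```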

classmethod from_chat_completion(chat_completion: ChatCompletion, completion_kwargs: Dict[str, Any] | None = None)[source]

Construct this class from an OpenAI ChatCompletion object.

Parameters:
  • chat_completion (ChatCompletion) – The OpenAI ChatCompletion object.

  • completion_kwargs (Dict[str, Any] | None) – The arguments which have been used for the completion.

Returns:

The constructed class.

class mltb2.openai.OpenAiTokenCounter(model_name: str, show_progress_bar: bool = False)[source]

Bases: object

Count OpenAI tokens.

Parameters:
  • model_name (str) –

    The OpenAI model name. Some examples:

    • gpt-4

    • gpt-3.5-turbo

    • text-davinci-003

    • text-embedding-ada-002

  • show_progress_bar (bool) – Show a progress bar during processing.

__call__(text: str | Iterable) int | List[int][source]

Count tokens for text.

Parameters:

text (str | Iterable) – The text for which the tokens are to be counted.

Returns:

The number of tokens if text is a str. If text is an Iterable, a list with the token count for each element.

Return type:

int | List[int]
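A usage sketch. Token counting happens locally (no API key is needed), but the mltb2 openai extra must be installed:

```python
from mltb2.openai import OpenAiTokenCounter

counter = OpenAiTokenCounter(model_name="gpt-4")

# A single string returns an int...
n = counter("Hello world!")

# ...while an iterable of strings returns a list of ints,
# one count per element.
ns = counter(["Hello world!", "How are you?"])
print(n, ns)
```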