Aleph Alpha

Introduction

Use with

from pyalm import AlephAlpha
llm = AlephAlpha("luminous-extended-control", aleph_alpha_key=KEY)

Alternatively, the key can be omitted and set via the environment variable AA_TOKEN. You can set the model to an unsupported one, or change it at any time, via llm.model = NAME. The cost of a call can be accessed via llm.finish_meta afterwards, or via

AlephAlpha.pricing
AlephAlpha.pricing_factors
AlephAlpha.pricing_meta
AlephAlpha.pricing_img
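
A minimal session might look like the following sketch; the wrapper function and the placeholder key are illustrative, not part of the library:

```python
import os

# Illustrative only: the key would normally be exported in your shell,
# not hard-coded. pyalm is assumed to read AA_TOKEN when no key is passed.
os.environ["AA_TOKEN"] = "<your-api-key>"

def run(prompt, model="luminous-extended-control"):
    # Imported lazily so this sketch can be read without pyalm installed.
    from pyalm import AlephAlpha
    llm = AlephAlpha(model)        # key is picked up from AA_TOKEN
    llm.model = "luminous-base"    # the model can be swapped at any time
    out = llm.create_native_completion(prompt, max_tokens=64)
    print(llm.finish_meta)         # per-call cost/usage metadata
    return out
```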

Documentation

class pyalm.models.alephalpha.AlephAlpha(model_path_or_name, aleph_alpha_key=None, verbose=0, n_ctx=2048, **kwargs)
available_models = ['luminous-supreme', 'luminous-base', 'luminous-extended-control', 'luminous-base-control', 'luminous-supreme-control', 'luminous-extended']
build_prompt(preserve_flow=False)

Build the prompt in the format native to the library

Parameters:

preserve_flow – Block suffix for purely text-based models

Returns:

prompt obj

create_native_completion(text, max_tokens=256, stop=None, token_prob_delta=None, token_prob_abs=None, log_probs=None, *, keep_dict=False, **kwargs)

Library-native completion retriever. Differs for each library; no processing of the output is done.

Parameters:
  • text – Prompt or prompt obj

  • max_tokens – maximum tokens generated in completion

  • stop – Additional stop sequences

  • keep_dict – If the library or API returns something other than raw tokens, whether to return the native format

  • token_prob_delta – dict, relative numbers added to token logits

  • token_prob_abs – dict, absolute logits for tokens

  • log_probs – int, when not None return the top X log probs and their tokens

  • kwargs – kwargs

Returns:

completion
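
As a sketch (the helper name, the token id 42, and the bias value are illustrative placeholders, not real Luminous token ids):

```python
def biased_completion(llm, prompt):
    # token_prob_delta nudges the logit of token id 42 upward by 5.0;
    # log_probs=3 additionally requests the top-3 log probs per step.
    return llm.create_native_completion(
        prompt,
        max_tokens=64,
        stop=["\n\n"],
        token_prob_delta={42: 5.0},
        log_probs=3,
    )
```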

create_native_generator(text, keep_dict=False, token_prob_delta=None, token_prob_abs=None, max_tokens=256, **kwargs)

Library-native generator for tokens. Differs for each library; no processing of the output is done.

Parameters:
  • text – Prompt or prompt obj

  • keep_dict – If the library or API returns something other than raw tokens, whether to return the native format

  • token_prob_delta – dict, relative numbers added to token logits

  • token_prob_abs – dict, absolute logits for tokens

  • kwargs – kwargs

Returns:

generator
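
A hypothetical streaming helper, assuming the generator yields plain token strings (i.e. keep_dict=False):

```python
def stream_completion(llm, prompt):
    # Print tokens as they arrive while collecting the full text.
    chunks = []
    for tok in llm.create_native_generator(prompt, max_tokens=128):
        print(tok, end="", flush=True)
        chunks.append(tok)
    return "".join(chunks)
```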

detokenize(toks)

Convert a list of token ids back into text

get_n_tokens(text)

How many tokens are in a string

Parameters:

text – tokenizable text

Returns:

Number of tokens
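
For example, a small context-budget check (the helper name and reserve value are illustrative):

```python
def fits_context(llm, text, n_ctx=2048, reserve=256):
    # Check whether `text` plus `reserve` tokens of completion
    # fit into the model's context window.
    return llm.get_n_tokens(text) + reserve <= n_ctx
```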

get_remaining_credits()

How many credits are still available for the given API key

Returns:

remaining credits
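
A possible pre-flight guard (helper name and threshold are illustrative):

```python
def ensure_credits(llm, minimum=1.0):
    # Abort early rather than failing mid-generation on an empty account.
    remaining = llm.get_remaining_credits()
    if remaining < minimum:
        raise RuntimeError(f"Only {remaining} credits left (need {minimum})")
    return remaining
```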

static image_from_source(source)

Create an Aleph Alpha compatible image from e.g. a file or URL

Parameters:

source

Returns:

Aleph compatible image obj

multimodal_completion(prompt_list, max_tokens=256, stop=None, **kwargs)

Prompt the model using multimodal input

Parameters:
  • prompt_list – A list of texts and images.

  • max_tokens – Max tokens to return

  • stop – List of strings to stop at

  • kwargs – kwargs

Returns:

Text
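
Combining the two methods above, a hypothetical captioning helper could look like this (helper name and instruction text are illustrative; the prompt list is assumed to mix strings and image objects):

```python
def caption_image(llm, source, instruction="Describe this image:"):
    # image_from_source accepts e.g. a file path or URL.
    img = llm.image_from_source(source)
    return llm.multimodal_completion([instruction, img],
                                     max_tokens=128, stop=["\n"])
```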

pricing = {'luminous-base': 0.03, 'luminous-base-control': 0.0375, 'luminous-extended': 0.045, 'luminous-extended-control': 0.05625, 'luminous-supreme': 0.175, 'luminous-supreme-control': 0.21875}

Pricing in credits per token unit (1000 tokens; see pricing_meta)

pricing_factors = {'Complete': {'input': 1, 'output': 1.1}, 'Summarize': {'input': 1.3, 'output': 1.1}}

Pricing factor depending on the endpoint (Complete or Summarize) and whether tokens belong to the prompt or the output

pricing_img = {'luminous-base': 0.03024, 'luminous-extended': 0.04536}

Cost in credits per processed image

pricing_meta = {'currency': 'credits', 'token_unit': 1000, '€/Credits': 0.2}

Pricing units: prices are given in credits per 1000 tokens; one credit corresponds to €0.20

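
Putting the tables together, a back-of-the-envelope cost estimate in € for a Complete call could be computed as follows; the helper is a sketch mirroring the class attributes above, not part of the library:

```python
# Values mirroring the class-level pricing attributes.
pricing = {'luminous-base': 0.03, 'luminous-base-control': 0.0375,
           'luminous-extended': 0.045, 'luminous-extended-control': 0.05625,
           'luminous-supreme': 0.175, 'luminous-supreme-control': 0.21875}
pricing_factors = {'Complete': {'input': 1, 'output': 1.1},
                   'Summarize': {'input': 1.3, 'output': 1.1}}
pricing_meta = {'currency': 'credits', 'token_unit': 1000, '€/Credits': 0.2}

def estimate_cost_eur(model, n_input_tokens, n_output_tokens, task="Complete"):
    # Price per single token = listed price / token unit (1000).
    per_token = pricing[model] / pricing_meta['token_unit']
    factors = pricing_factors[task]
    credits = per_token * (n_input_tokens * factors['input']
                           + n_output_tokens * factors['output'])
    # Convert credits to € at the listed rate.
    return credits * pricing_meta['€/Credits']
```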
summarize(*, text=None, path_to_docx=None)

Summarize a text using the current model

Parameters:
  • text – Text to summarize

  • path_to_docx – Alternative to text. Summarize a .docx document

Returns:

summarized text as string
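
A small illustrative wrapper that picks the right keyword-only argument:

```python
def summarize_report(llm, *, text=None, path=None):
    # Exactly one of `text` / `path` should be given; both are
    # keyword-only on the underlying method.
    if path is not None:
        return llm.summarize(path_to_docx=path)
    return llm.summarize(text=text)
```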

tokenize(text)

Convert text to tokens as a vector representation

Parameters:

text

Returns:

List of tokens as ints

tokenize_as_str(text)

Convert text to tokens as a vector representation, but with each token converted to its string form

Parameters:

text

Returns:

List of tokens as strings
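
A hypothetical helper pairing both tokenizer views, assuming tokenize and tokenize_as_str split the text identically:

```python
def inspect_tokens(llm, text):
    # Pair each token id with its string form for debugging prompts.
    ids = llm.tokenize(text)
    strs = llm.tokenize_as_str(text)
    return list(zip(ids, strs))
```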