Embedding
step
¶
Consists of DVCSteps to embed files and save them, for example, as CSV.
Classes¶
EmbeddingStep
¶
Bases: SimpleSplitterStep, TypedStep[EmbeddingSettings, list[MarkdownDataContract], DataFrame[EmbeddingResult]]
Step for consuming list[MarkdownDataContract] and returning DataFrame[EmbeddingResult].
Source code in wurzel/steps/embedding/step.py
Functions¶
__md_to_plain(element, stream=None)
classmethod
¶
Converts a markdown element into plain text.
Parameters¶
element : Element
    The markdown element to convert.
stream : StringIO, optional
    The stream to which the plain text is written. If None, a new stream is created.
Returns:¶
str
    The plain text representation of the markdown element.
Source code in wurzel/steps/embedding/step.py
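The exact Element type depends on the Markdown parser used internally, so the following is only a minimal sketch assuming an xml.etree.ElementTree-style element and the StringIO streaming pattern described above; the helper name md_element_to_plain is hypothetical.

```python
from io import StringIO
from xml.etree.ElementTree import Element, fromstring


def md_element_to_plain(element: Element, stream: StringIO | None = None) -> str:
    """Hypothetical stand-in for __md_to_plain: collect the element's text recursively."""
    if stream is None:
        stream = StringIO()
    if element.text:
        stream.write(element.text)
    for child in element:
        md_element_to_plain(child, stream)  # recurse into nested elements
        if child.tail:
            stream.write(child.tail)  # text that follows the child element
    return stream.getvalue()


# Example: an element tree as it might be produced from rendered Markdown
fragment = fromstring("<p>Hello <strong>world</strong>!</p>")
print(md_element_to_plain(fragment))  # -> "Hello world!"
```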
get_embedding_input_from_document(doc)
¶
Clean the document such that it can be used as input to the embedding model.
Parameters¶
doc : MarkdownDataContract
    The document containing the page content in Markdown format.
Returns:¶
str
    Cleaned text that can be used as input to the embedding model.
Source code in wurzel/steps/embedding/step.py
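What "cleaning" involves depends on the embedding model; as a rough illustration of the idea only (not the actual implementation), a Markdown body could be reduced to plain, whitespace-normalized text as below. The helper clean_markdown_for_embedding is hypothetical.

```python
import re


def clean_markdown_for_embedding(md_text: str) -> str:
    """Hypothetical cleaner: reduce Markdown to plain, whitespace-normalized text."""
    text = re.sub(r"\[([^\]]+)\]\([^)]+\)", r"\1", md_text)  # keep link text, drop URLs
    text = re.sub(r"[#*_>]+", " ", text)                     # drop heading/emphasis markup
    return re.sub(r"\s+", " ", text).strip()                 # collapse whitespace


print(clean_markdown_for_embedding("# Reset password\n\nOpen **Settings** and follow the [guide](https://example.com)."))
# -> "Reset password Open Settings and follow the guide."
```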
get_simple_context(text)
¶
Creates a simple context from a given text.
Source code in wurzel/steps/embedding/step.py
is_stopword(word)
¶
run(inpt)
¶
Executes the embedding step by processing input markdown files, generating embeddings, and saving them to a CSV file.
Source code in wurzel/steps/embedding/step.py
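A usage sketch only: the import paths, the MarkdownDataContract field names (md, keywords, url), and the argument-free construction of EmbeddingStep are assumptions and may differ from the actual API; configuration such as the embedding API endpoint is expected to come from EmbeddingSettings.

```python
from wurzel.steps.embedding.step import EmbeddingStep  # module documented above
from wurzel.datacontract import MarkdownDataContract   # import path is an assumption

# Field names below (md, keywords, url) are assumptions for illustration.
docs = [
    MarkdownDataContract(
        md="# FAQ\n\nHow do I reset my password?",
        keywords="faq password",
        url="https://example.com/faq",
    )
]

step = EmbeddingStep()  # assumes settings are provided via the environment
frame = step.run(docs)  # DataFrame[EmbeddingResult]; per the docstring, the step also saves to CSV
print(frame.head())
```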
whitespace_word_tokenizer(text)
classmethod
¶
step_multivector
¶
Consists of DVCSteps to embed files and save them, for example, as CSV.
Classes¶
EmbeddingMultiVectorStep
¶
Bases: EmbeddingStep, TypedStep[EmbeddingSettings, list[MarkdownDataContract], DataFrame[EmbeddingMultiVectorResult]]
Step for consuming list[MarkdownDataContract] and returning DataFrame[EmbeddingMultiVectorResult].
Source code in wurzel/steps/embedding/step_multivector.py
Functions¶
run(inpt)
¶
Executes the embedding step by processing a list of MarkdownDataContract objects, generating embeddings for each document, and returning the results as a DataFrame.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| inpt | list[MarkdownDataContract] | A list of markdown data contracts to process. | required |
Returns:

| Type | Description |
|---|---|
| DataFrame[EmbeddingMultiVectorResult] | A DataFrame containing the embedding results. |
Raises:

| Type | Description |
|---|---|
| StepFailed | If all input documents fail to generate embeddings. |
Logs
- Warnings for documents skipped due to EmbeddingAPIException.
- A summary warning if some or all documents are skipped.
Source code in wurzel/steps/embedding/step_multivector.py
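A sketch of the failure behaviour described above, not the actual API: the import location of StepFailed and the argument-free construction of the step are assumptions; docs is a list[MarkdownDataContract] as in the previous example.

```python
import logging

from wurzel.steps.embedding.step_multivector import EmbeddingMultiVectorStep
from wurzel.exceptions import StepFailed  # import path is an assumption

logging.basicConfig(level=logging.WARNING)  # surfaces the skip warnings described above

step = EmbeddingMultiVectorStep()
try:
    frame = step.run(docs)  # docs: list[MarkdownDataContract], see previous sketch
except StepFailed:
    # Raised only when every document was skipped, e.g. because the embedding
    # API rejected all requests (EmbeddingAPIException per document).
    raise
else:
    print(f"{len(frame)} multi-vector rows produced")
```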
settings
¶
Classes¶
EmbeddingSettings
¶
Bases: SplitterSettings
EmbeddingSettings is a configuration class for embedding-related settings.
Attributes:

| Name | Type | Description |
|---|---|---|
| API | Url | The API endpoint for embedding operations. |
| NORMALIZE | bool | A flag indicating whether to normalize embeddings. Defaults to False. |
| BATCH_SIZE | int | The batch size for processing embeddings. Must be greater than 0. Defaults to 100. |
| TOKEN_COUNT_MIN | int | The minimum token count for processing. Must be greater than 0. Defaults to 64. |
| TOKEN_COUNT_MAX | int | The maximum token count for processing. Must be greater than 1. Defaults to 256. |
| TOKEN_COUNT_BUFFER | int | The buffer size for token count. Must be greater than 0. Defaults to 32. |
| STEPWORDS_PATH | Path | The file path to the stopwords file. Defaults to "data/german_stopwords_full.txt". |
| N_JOBS | int | The number of parallel jobs to use. Must be greater than 0. Defaults to 1. |
| PREFIX_MAP | dict[Pattern, str] | A mapping of regex patterns to string prefixes. This is validated and transformed using the _wrap_validator_model_mapping validator. |
Methods:

| Name | Description |
|---|---|
| _wrap_validator_model_mapping | A static method to wrap and validate the model mapping. It converts string regex keys in the input dictionary to compiled regex patterns and applies a handler function to the result. |
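These settings are typically supplied through the environment; the variable names below, including the EMBEDDINGSTEP__ prefix, are an assumption about how the framework maps settings to environment variables, and only the setting names themselves come from the table above.

```python
import os

# Hypothetical environment configuration for EmbeddingSettings; the prefix and
# exact variable names are assumptions.
os.environ["EMBEDDINGSTEP__API"] = "https://embeddings.internal.example/v1"
os.environ["EMBEDDINGSTEP__NORMALIZE"] = "true"       # defaults to False
os.environ["EMBEDDINGSTEP__BATCH_SIZE"] = "200"       # must be > 0, defaults to 100
os.environ["EMBEDDINGSTEP__TOKEN_COUNT_MIN"] = "64"
os.environ["EMBEDDINGSTEP__TOKEN_COUNT_MAX"] = "256"  # must be > 1
os.environ["EMBEDDINGSTEP__N_JOBS"] = "4"             # number of parallel jobs
```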