Context Processors¶
Context processors implement context-only logic. They are responsible for creating and updating context keys, but they never see the full context object directly and they never touch the data channel.
High-level model¶
At runtime, a context processor is invoked roughly as follows:
1. The node builds the pipeline graph and resolves parameters (node configuration > context > defaults).
2. The node constructs a validating context observer.
3. The node calls `processor.operate_context(context, context_observer, **params)`. The processor executes its internal logic and uses notifier hooks to propose context updates or deletions.
4. The observer validates and applies those changes to the active context.
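The steps above can be sketched as follows. This is a simplified illustration under stated assumptions: `ValidatingObserver` and `ToyProcessor` are hypothetical stand-ins for this sketch, not Semantiva's actual classes or signatures.

```python
# Simplified sketch of the invocation flow; the class shapes here are
# illustrative stand-ins, not Semantiva's real API.

class ValidatingObserver:
    """Applies proposed updates only for keys the processor declared."""

    def __init__(self, allowed_keys, context):
        self.allowed_keys = set(allowed_keys)
        self.context = context

    def update(self, key, value):
        # Validation step: reject writes to undeclared keys.
        if key not in self.allowed_keys:
            raise KeyError(f"undeclared context key: {key}")
        self.context[key] = value


class ToyProcessor:
    @classmethod
    def get_created_keys(cls):
        return ["training.learning_rate"]

    def operate_context(self, observer, **params):
        # Internal logic proposes an update through the observer
        # (the notifier hook); it never mutates the context directly.
        lr = params["base_lr"] / max(params["batch_size"], 1)
        observer.update("training.learning_rate", lr)


context = {}
observer = ValidatingObserver(ToyProcessor.get_created_keys(), context)
ToyProcessor().operate_context(observer, base_lr=0.1, batch_size=10)
print(context)  # {'training.learning_rate': 0.01}
```

The processor only ever talks to the observer, which is what lets the framework validate every mutation against the declared keys.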
The key points for authors:
- You do not override `process`/`operate_context`.
- You never receive the context object as a user-facing argument.
- You implement a stateless `_process_logic` method and use notifier helpers such as `_notify_context_update` and `_notify_context_deletion`.
- You declare which context keys you may create or suppress so that validation and inspection can remain deterministic.
Built-in context processor utilities¶
Semantiva provides factory functions and string-based shortcuts for common context processor patterns. These are resolved automatically when you use them as processor names in pipelines:
- `rename:src:dst`

  Rename a context key from `src` to `dst`. Reads the original value and writes the new key, then suppresses the original.

  Example: `"processor": "rename:input_key:output_key"`

- `delete:key`

  Delete a context key after resolution. If the key is present in context, it is removed and suppressed from downstream processors.

  Example: `"processor": "delete:temp_value"`

- `template:"template_string":output_key`

  Render a template string using existing context keys and store the result in a new key. Placeholders like `{key_name}` are replaced with resolved context values.

  Example: `"processor": "template:result_{run_id}.txt:path"`
All built-in utilities are resolved by the pipeline at runtime and behave like regular context processors: they declare which keys they create/suppress and request mutations via notifier hooks.
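As an illustration of the `template:` pattern, placeholder substitution over resolved context values might look like the sketch below. This is a minimal re-implementation of the described behaviour for clarity, not Semantiva's internal code; `render_template` is a hypothetical helper.

```python
import re


def render_template(template: str, context: dict) -> str:
    """Replace {key_name} placeholders with resolved context values."""
    def substitute(match: re.Match) -> str:
        return str(context[match.group(1)])
    return re.sub(r"\{(\w+)\}", substitute, template)


context = {"run_id": 42}
# Behaves like "processor": "template:result_{run_id}.txt:path"
context["path"] = render_template("result_{run_id}.txt", context)
print(context["path"])  # result_42.txt
```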
Minimal example¶
A typical pattern looks like this:
```python
from semantiva.context_processors import ContextProcessor


class ComputeLearningRate(ContextProcessor):
    """Derive learning rate from batch size."""

    @classmethod
    def get_created_keys(cls) -> list[str]:
        # Keys this processor may create
        return ["training.learning_rate"]

    def _process_logic(self, *, batch_size: int, base_lr: float = 0.1):
        # ``batch_size`` and ``base_lr`` are resolved by Semantiva from
        # node parameters and/or context; no Context object is passed in.
        lr = base_lr / max(batch_size, 1)
        self._notify_context_update("training.learning_rate", lr)
```
Notes:

- Parameters from YAML (for example `base_lr: 0.05`) are not passed through the constructor; they are resolved and injected into `_process_logic`.
- Any context value read by the processor is resolved into a parameter as well (here: `batch_size`).
- All writes go through `_notify_context_update`/`_notify_context_deletion` and are validated by the context observer.
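Tracing the example's arithmetic: with `base_lr: 0.05` supplied as a node parameter and `batch_size: 128` resolved from parameters or context, the value proposed via the notifier is:

```python
base_lr = 0.05
batch_size = 128

# Same arithmetic as ComputeLearningRate._process_logic
lr = base_lr / max(batch_size, 1)
print(lr)  # 0.000390625
```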
Context invariants¶
- `_process_logic` receives only the runtime parameters resolved by the node; it must not accept `ContextType`.
- Reads and writes go through the validating context observer created by the node; use `_notify_context_update`/`_notify_context_deletion` for all mutations.
YAML configuration¶
In a pipeline, the corresponding node might look like:
```yaml
pipeline:
  nodes:
    - processor: ComputeLearningRate
      parameters:
        base_lr: 0.05
        batch_size: 128  # may also come from context
```
The parameter resolution rules are the same as for data processors:
1. Node parameters
2. Then context
3. Then default values from the function signature
Design guidelines¶
- Keep context processors small and composable.
- Use clear, namespaced keys (for example `training.learning_rate`).
- Declare created/suppressed keys via `get_created_keys`/`get_suppressed_keys` so that observers can validate behaviour.
- Use context processors to prepare domain-specific state; keep data processors focused on computations over the data channel itself.
For deeper architectural details, see Context Processing Architecture.