Basics#

As Langworks builds on top of Pypeworks, many of Pypeworks’ basic concepts are also relevant to Langworks. To this foundation Langworks adds several new concepts specific to working with LLMs.

Query#

A Query is a specialised Pypeworks Node providing an interface to an LLM. Instead of taking a function, a query takes a prompt:

Query(
    query = "Explain like I am five what the Python library Langworks is used for."
)

Unlike typical prompts, these prompts may be templated using Langworks' built-in template language, which is based on the Jinja template language. This allows for access to any arguments passed to the query:

Query(
    query = "Explain like I am five what {{ input }} is used for."
)

In addition, a query may provide guidance on how the LLM should handle the prompt, using Langworks' built-in dynamic template language:

Query(

    query = (
        """
        What weighs more: {{ input[0] }} or {{ input[1] }}? Think step-by-step before stating \\
        your final answer, either '{{ input[0] }}' or '{{ input[1] }}', delimited by triple \\
        asterisks (i.e. ***{{ input[0] }}*** or ***{{ input[1] }}***).
        """
    ),

    guidance = (
        """
        Let's think step-by-step\\
        {% gen params = Params(stop = ["***"], include_stop = True) %}\\
        {% choice [input[0], input[1]], params = Params(temperature = 0) %}\\
        ***
        """
    )
)
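
In this example, the {% gen %} tag has the LLM generate its step-by-step reasoning, stopping once it emits the *** delimiter, after which the {% choice %} tag constrains the LLM to answer with one of the two given options.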

Hint

Langworks automatically dedents and strips texts passed to query, guidance and history. When working with text blocks, as delimited by triple quotes ("""), Langworks can also remove unwanted newlines by ending lines with \\.
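
For example, given this behaviour, the following two queries produce the same effective prompt:

Query(
    query = (
        """
        What weighs more: a kilogram of feathers, \\
        or a kilogram of lead?
        """
    )
)

Query(
    query = "What weighs more: a kilogram of feathers, or a kilogram of lead?"
)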

Input passed by argument may be complemented by input passed via context. To do so, assign a lookup table to the query's context argument, as you would when rendering a Jinja template:

Query(

    query = (
        "Tell me more about {{input}} in relation to {{topic}}."
    ),

    context = dict(
        topic = "humans"
    )
)
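
When this query is run, {{input}} is resolved from the argument passed to the query, whereas {{topic}} is drawn from the context defined above.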

Messages and history#

Interaction with a conversational LLM may consist of a back and forth of messages. Sometimes it is desirable to prefill such a conversation to steer further interaction, e.g. to define a system prompt, or to restore an earlier conversation. This may be done by assigning a langworks.messages.Thread to Query's history argument:

Query(

    history = [
        {
            "content": "You are a helpful assistant.",
            "role": "system"
        }
    ]
)
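
Restoring an earlier conversation works the same way; for instance, assuming the conventional user and assistant roles:

Query(

    history = [
        {
            "content": "You are a helpful assistant.",
            "role": "system"
        },
        {
            "content": "What is the capital of France?",
            "role": "user"
        },
        {
            "content": "The capital of France is Paris.",
            "role": "assistant"
        }
    ],

    query = "And how many people live there?"
)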

Langwork#

Just as a Query may be likened to a Pypeworks Node, a Langwork may be likened to a Pipework. In fact, a Langwork may be used just like a Pipework, composing nodes and queries into a directed acyclic graph in which both may be seamlessly intertwined:

# Assumed import paths; adjust to match your installation.
from pypeworks import Connection, Node

from langworks import Langwork, Query

langwork = (

    Langwork(

        # Nodes / queries
        selector = Query(
            query = "What is most well-known {{input}}?"
        ),

        wikifier = Query(
            query = (
                """
                Can you give me a brief Wikipedia-like article in Markdown describing the \\
                {{input}} of your choice?"
                """
            )
        ),

        extractor = Node(
            lambda *args, history = [], context = {}: history[-1]
        ),

        # Connections
        connections = [
            Connection("enter"     , "selector"),
            Connection("selector"  , "wikifier"),
            Connection("wikifier"  , "extractor"),
            Connection("extractor" , "exit")
        ]

    )

)

for animal in ["dog", "cat"]:
    print(langwork(animal))

Langworks expands upon this by providing various utilities to ease working with LLMs: a Langwork may enforce common prompt histories and template contexts, as well as specify a common middleware to interface with LLMs.
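
For example, a shared system prompt and template context may be defined once on the Langwork itself. The sketch below assumes Langwork accepts history and context arguments analogous to those of Query:

Langwork(

    # Config: shared by all queries in this langwork.
    history = [
        {
            "content": "You are a helpful assistant.",
            "role": "system"
        }
    ],

    context = dict(
        topic = "humans"
    ),

    # Queries
    teller = Query(
        query = "Tell me more about {{input}} in relation to {{topic}}."
    ),

    # Connections
    connections = [
        Connection("enter"  , "teller"),
        Connection("teller" , "exit")
    ]
)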

Middleware#

Middleware abstracts away the code needed to interface with specific LLM providers. Within Langworks all middleware must satisfy a common interface, making it easy to interchange LLM providers. In fact, as both Query and Langwork provide hooks for middleware, LLM providers may easily be mixed:

from langworks.middleware.vllm import (
    SamplingParams,
    vLLM
)

langwork = (

    Langwork(

        # Config
        middleware = vLLM(
            url          = "http://127.0.0.1:4000/v1",
            model        = "meta-llama/Meta-Llama-3-8B-Instruct",
            params       = SamplingParams(temperature = 0.3)
        ),

        # Queries
        gen_plan = Query(

            query = "Give a step-by-step explanation how {{input}} may be implemented."

            # Uses middleware attached to langwork.

        ),

        extract_plan = Node(
            lambda *args, history = [], context = {}: history[-1]
        ),

        gen_python = Query(

            query = (
                """
                Write me a function in Python that implements the computation detailed below:

                {{input}}
                """
            ),

            # Uses a different middleware to access a model specialised in code generation.
            middleware = vLLM(
                url          = "http://127.0.0.1:4001/v1",
                model        = "mistralai/Codestral-22B-v0.1",
                params       = SamplingParams(temperature = 0.3)
            )
        ),

        extract_code = Node(
            lambda *args, history = [], context = {}: history[-1]
        ),

        # Connections
        connections = [
            Connection("enter"        , "gen_plan"),
            Connection("gen_plan"     , "extract_plan"),
            Connection("extract_plan" , "gen_python"),
            Connection("gen_python"   , "extract_code"),
            Connection("extract_code" , "exit")
        ]

    )

)

for challenge in ["quick sort", "A* path finding"]:
    print(langwork(challenge))
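
Here the middleware assigned to the Langwork acts as a default for all of its queries, while individual queries, such as gen_python above, may override it with middleware of their own.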