Use the YoKI AI Service

This documentation provides an overview of the functions of the university's open-source AI platform, YoKI. The service includes various large language models and its own user interface.

General information for using YoKI

  • Please read the terms of use and the privacy statement (see links).
  • At this time, the use of YoKI is only possible from within the university network or via VPN. More information about setting up our VPN can be found in the link bar.
  • The language of YoKI's web interface cannot be changed. However, it is possible to interact with the chatbot in several languages.
  • Due to data protection laws, at this time chats and system prompts are not being permanently saved. The web search function is not yet available.
  • If you have any questions about the YoKI platform, setting up our VPN service or our privacy statements and terms of use, please contact the IT Service.
     

Click here to go to the YoKI platform!

The scope of YoKI

As of July 2025, the platform comprises four different large language models covering various areas of application, allowing users to perform application-specific work with YoKI. The models are designed for students, researchers and employees and are presented in greater detail below.

These models are distinct from both the chatbot and the platform itself. You are chatting with the chatbot YoKI, which provides access to various language models.

You can configure both the individual models as well as the behavior of the chatbot yourself. More information about this can be found in the sections below.

Models available in YoKI

Currently available models: Llama, DeepSeek, CohereForAI, Qwen

LLaMA 3.1 (70B): Meta's model is geared toward reasoning, instruction and translation tasks. This model is particularly suitable for daily use and general tasks. (Note: This model is set by default.)

DeepSeek-R1 Distill: This LLM is designed for everyday tasks and is particularly useful in environments where GPU memory is limited.

Aya-23-8B: The model is specially suited for multilingual tasks. It has been trained in 23 languages, including Arabic, Indonesian, Japanese, Romanian, Russian, German and English.

Qwen2.5-Coder-14B: Qwen is focused on generating, improving and enhancing code. It is capable of writing, debugging and annotating code in Python, C++ and other languages.
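To illustrate the kind of task Qwen2.5-Coder is suited for, here is a hypothetical example (not taken from the platform): a buggy Python function you might paste into the chat with the request "debug this function," together with a corrected version of the kind the model might suggest.

```python
# A deliberately buggy function: it is meant to average a list of
# numbers, but crashes on an empty list.
def average(numbers):
    return sum(numbers) / len(numbers)  # ZeroDivisionError if numbers == []

# A corrected version, guarding against the empty-list case:
def average_fixed(numbers):
    if not numbers:
        return 0.0  # define the average of an empty list as 0.0
    return sum(numbers) / len(numbers)

print(average_fixed([2, 4, 6]))  # 4.0
print(average_fixed([]))         # 0.0
```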

Select a model via model selector

On the web interface, you can select a model using the model selector on the left-hand side. In this view, the models cannot be configured, nor can any further information about them be displayed. You can find out more about configuring the models in the section "Configure a model in the model settings."

Alternatively, you can select a model using the model settings. This is documented in the section "Select a model via model settings."

Note: The "LLaMA 3.1 (70B)" language model is set by default.

1. On the web interface, open model selection via the "Models" option on the left.

Open "Models" on the left.

2. In the center of the screen, select the model that you would like to use.

You can choose your model in the model selection.

Access application and model settings

YoKI settings are split into model settings and application settings.

  • In the model settings, the language model (e.g., LLaMA or DeepSeek) can be selected and configured, if necessary.
  • In the application settings, settings can be made regarding the behavior of the YoKI chatbot.

Open YoKI settings

There are three ways to open the YoKI settings.

The first two options bring you directly to the model settings; the third option takes you to the application settings.

Option 1 (Go to the model settings)

On the homepage, click on the cog icon on the right next to the currently selected model. This is where you can also see which model has been selected. 

Note: You can only use this option when you are on the platform's homepage. You can navigate to the homepage by clicking on the YoKI logo on the top left.

Click the cog icon on the home screen.

Option 2 (Go to the model settings)

Click on the model name found under the chat panel. The model name displays the currently selected model.

Click the model name below the chat panel.

Option 3 (Go to application settings)

Click on the option "Settings" found in the chat list on the left.

Select "Settings" on the left.

Note: With the third option, the model settings do not open directly because you are not accessing the settings via a model. Instead, you open the application settings, which comprise only one part of the overall YoKI settings. Since both the model settings and the application settings are part of the general YoKI settings, you can switch between them within the "Settings" window.

Select a model via model settings

In addition to direct model selection, it is possible to choose a model in the model settings.

Note: The "LLaMA 3.1 (70B)" language model is set by default.
 

1. In model settings on the left, you can select the model you would like to use.

2. Then click on “New Chat.” Your chats with YoKI will now use the selected model until the next change.

Select the model on the left and then click "New Chat."

Configure a model in the model settings

In the model settings, you have the option of customizing the language models:

  • You can set a system prompt for each model.
  • You can generate a direct link for a model in YoKI.
  • You can have more information displayed about the model.

Create a system prompt

In the model settings, you can create a system prompt, which lets you configure the behavior of a language model. The system prompt is the first instruction the model processes and can be written almost without restriction. You can define your own system prompt for each individual language model.

If a user-defined system prompt has been configured, it will be displayed at the top of the chat. The example below will illustrate a possible use case. The system prompt example states that each output from YoKI (based on the language model LLaMA) should begin with “I am a language model.”

Note: Due to data protection laws, the system prompts will not be permanently saved or stored.

An example of a system prompt
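Conceptually, the system prompt is simply placed before the user's first message, which is why it can steer every subsequent reply. A minimal sketch in Python (the message structure shown here is a common convention for chat models, not YoKI's actual internal format):

```python
# Illustrative only: a system prompt typically precedes all user
# input in a chat-style conversation sent to a language model.
system_prompt = 'Begin every answer with "I am a language model."'

conversation = [
    {"role": "system", "content": system_prompt},  # processed first
    {"role": "user", "content": "What is the capital of France?"},
]

# The model sees the system message before any user message.
print(conversation[0]["role"])  # system
```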

Create a direct link for a model

In the model settings, you can create a direct link for a model in YoKI. This can be useful if you want to bookmark YoKI with a particular model or want to send another person the link to the YoKI chatbot with the model you have selected.

You can copy a direct link for YoKI.

Display more information about a model

In the model settings, you can find more information about a model. You can visit the model's official homepage as well as the Huggingface resource page for the model and its developers.

You can access the links to the model pages in the model settings.

Configure YoKI's behavior in the application settings

In addition to configuring the individual language models, it is also possible to control the behavior of the YoKI chatbot itself. This can be done in the application settings.

Configure output generation

You have the option of setting the platform's generation behavior. YoKI is able to display a model’s output successively or all at once. You can activate or deactivate this option in the application settings under the menu item “Disable streaming tokens.” If the option has been activated, the text will be generated and displayed in its entirety immediately after generation. If the option has been deactivated, the text will be generated and shown “live” concurrently with generation.

Hide emojis in chat titles

YoKI automatically generates a title for each chat with an emoji at the beginning of the title. You can disable these title emojis in the application settings under the menu item “Hide emoticons in conversation topics.” You can also individually and manually rename the chats.

Here you can adjust YoKI's behavior.

Customize user interface

Light / Dark mode

You can choose YoKI's color scheme by clicking the "Theme" option on the left side of the web interface. Currently, a light mode and a dark mode are available.

Preview of the selectable color schemes

Delete chats

You can either delete chats individually or all chats at the same time.

Note: Due to data protection laws, chats will not be permanently saved or stored.

Delete chats individually

1. In the chat list on the left of the YoKI web interface, hold the mouse cursor over the chat you would like to delete and click on the wastebasket symbol.

Click the wastebasket icon.

2. Confirm the deletion.

Confirm the deletion.

Delete all chats

1. Open the application settings by clicking "Settings" in the chat list on the left of the page.

2. Click "Delete all conversations."

Click "Delete all conversations."

3. Confirm the deletion.

Confirm the deletion.

Tips for prompts

What is a prompt?

A prompt is an instruction given to an AI system to produce a specific action. The AI system processes human input to generate output (e.g., an image or text).

What does an effective prompt look like?

Here are some tips that you should keep in mind for prompt engineering:

1. Establish the context and a role.

  • Instead of simply asking a question, it is more useful to give the AI system a role and the situational context.
  • Example:
    • Not effective: "Write a blog entry on microservices."
    • ✅ Effective: "Write a technical blog entry about the patterns for microservice architecture as a seasoned software architect with fifteen years of experience building distributed systems.

2. Set clearly defined goals and limitations

  • Include your desired result as well as any potential limitations. This allows the AI system to work more precisely on your problem or question.
  • Example:
    • Not effective: "Help me with my Python code."
    • ✅ Effective: "Check this Python code for performance improvements. Concentrate on reducing memory requirements and improving runtime complexity."

3. Define the output format

  • Precisely define how you want the information to be structured.
  • Example:
    • Not effective: "Tell me something about different databank types."
    • ✅ Effective: "Compare NoSQL and SQL databanks in this format: usage scenarios, performance characteristics and scalability considerations.”

As a general rule, your prompts should be detailed and specify what you expect from the AI system.
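To illustrate the kind of result the Python prompt in tip 2 is asking for, here is a hypothetical before/after: replacing a quadratic membership check with a set reduces runtime complexity from O(n²) to O(n).

```python
def find_duplicates_slow(items):
    """O(n^2): checks each element against all preceding elements."""
    duplicates = []
    for i, item in enumerate(items):
        if item in items[:i] and item not in duplicates:
            duplicates.append(item)
    return duplicates

def find_duplicates_fast(items):
    """O(n): a set makes each membership check constant time on average."""
    seen, duplicates = set(), set()
    for item in items:
        if item in seen:
            duplicates.add(item)
        seen.add(item)
    return sorted(duplicates)

print(find_duplicates_fast([1, 2, 3, 2, 1]))  # [1, 2]
```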