This issue can have multiple causes. Overall, our aim is to optimise response speed while maintaining high-quality outputs.
Speed can vary due to:
Selected large language model (LLM): different LLMs respond at different speeds. We generally offer Gemini models, as they are currently the best on the market. Use the Pro models for complex tasks and the Flash models for simpler ones: Flash models respond more quickly, while Pro models can handle larger inputs and more complex tasks. We are model agnostic, meaning you can always switch to other popular models, such as those from OpenAI.
Web search: if you are using web search, crawling different websites can be time-consuming, which slows answers down.
External systems: if you are using a custom connector to an external system, this could be the reason, as some APIs are quite slow.
Firewall: if you are hosting Genow in your tenant, ensure that your firewall is not slowing down Genow unnecessarily.
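The external-system cause above is easy to confirm with a quick timing check. The sketch below is a minimal illustration, not part of Genow: `fetch_from_external_system` is a hypothetical stand-in for your real connector call, and the 0.2-second delay simulates a slow API.

```python
import time

# Minimal sketch: time the connector call to see whether the external
# API dominates response latency. fetch_from_external_system is a
# hypothetical stand-in -- replace it with your real connector call.
def fetch_from_external_system() -> str:
    time.sleep(0.2)  # simulate a slow external API
    return "response payload"

start = time.perf_counter()
payload = fetch_from_external_system()
elapsed = time.perf_counter() - start

print(f"External system answered in {elapsed:.2f}s")
if elapsed > 0.15:
    print("Connector latency dominates; consider caching or a faster API")
```

If the measured time is close to the total answer time you observe in Genow, the external system, not the platform, is the bottleneck.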
Are you missing use cases, knowledge assets or certain features? Please log out and log back in first. If you still cannot find certain features or sources of knowledge in the Use Case Hub, please contact your administrator or IT department.
This can have multiple causes. Overall, make sure you ask well-formulated questions with enough context, and try different prompting techniques. Most of the time, this is not a technical issue but rather one of missing context or poor data quality.
Make sure the data is synced and ingested in Genow.
If you cannot find certain information in existing files, please ensure that the data quality is good and that the document follows our data guidelines. This is especially important for tables. You can check whether the content of your table was understood correctly by opening the table’s source after receiving an answer and navigating to the relevant text snippets. You should find Markdown containing “START_OF_TABLE”; if not, your table was not recognised as a table in the first place. Copy the contents, paste them into the chat and add the prompt ‘Please provide me with the table of this Markdown’. If you receive the correct table back, it was correctly understood by the parser, which scans the information in documents.
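The marker check described above can be sketched in a few lines. This is a minimal illustration, assuming the parser emits the “START_OF_TABLE” marker ahead of the Markdown table rows; the snippet text and function name are hypothetical examples, not Genow internals.

```python
# Minimal sketch: check whether a parsed text snippet contains a
# recognised table. "START_OF_TABLE" is the marker described above;
# the snippet content below is a made-up example.
def table_was_recognised(snippet: str) -> bool:
    """Return True if the parser marked a table in this snippet."""
    return "START_OF_TABLE" in snippet

parsed_snippet = """START_OF_TABLE
| Product | Price |
|---------|-------|
| Widget  | 9.99  |
"""

print(table_was_recognised(parsed_snippet))          # → True
print(table_was_recognised("Plain paragraph text"))  # → False
```

If the marker is missing, fix the table layout in the source document first; no amount of prompting will recover a table the parser never recognised.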
Sometimes, the layout parser will not be able to interpret complex layout designs. You can check this in the same way as you did for the tables above, by looking at the referenced text snippets on the right-hand side of the document preview.
Overall, LLMs are statistical models that predict the next word based on semantic context. For example, if you ask Genow how something works, it will “prefer” to answer with a how-to guide rather than a contact person. With this in mind, update your data so that it reflects the questions users would ask and the answers they would generally expect.
Not happy with the output format? Try adding the desired format to your prompt. You can manage repetitive requirements regarding output formats or tasks by adding agents. If you want to influence every user query, you can also set a tonality in the use case settings.
If you can see a maintenance banner in Genow, either one of our services is experiencing downtime or we are updating your tenant. Rest assured that we are working on a solution and your Genow platform will be back to normal soon.