This is a beta module of the NID System that lets its users leverage the capabilities of generative AI. The feature uses a fully local Large Language Model (LLM) and vector embedding store to generate answers to users' queries based on information available in Austria-Forum and the NID library. This creates a cutting-edge yet secure environment: the embeddings encode the semantic meaning and context of text, allowing the LLM to understand context and judge similarity when answering query prompts.
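As a rough illustration of how such a pipeline fits together, the sketch below follows the common retrieval-augmented generation pattern: embed the query, rank stored chunks by similarity, and pass the best matches to the model as context. The embedding model name and the local_llm() helper are assumptions for illustration, not the actual NID-GPT components.

```python
# Minimal retrieval-augmented generation (RAG) sketch. The embedding model
# and local_llm() are illustrative assumptions, not NID-GPT internals.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed local embedder

documents = [
    "Example passage about the NID library system.",
    "Example passage about Austria-Forum.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def local_llm(prompt: str) -> str:
    # Placeholder: in NID-GPT this would call the locally hosted LLM.
    raise NotImplementedError

def answer(query: str, k: int = 2) -> str:
    """Embed the query, fetch the k most similar chunks, ask the LLM."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(doc_vectors @ q)[::-1][:k]  # cosine similarity ranking
    context = "\n".join(documents[i] for i in top)
    return local_llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```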
At the moment we are testing various embedding schemes and LLMs for optimal results. We are also evaluating how well the system performs on standard CPU servers, and what value a GPU-based NID hosting infrastructure would add.
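A CPU-versus-GPU comparison of this kind can be probed with a simple latency measurement. The following sketch assumes a sentence-transformers model and a CUDA-capable PyTorch install; neither is confirmed to match the actual NID setup.

```python
# Rough latency probe for CPU vs. GPU embedding throughput -- a sketch,
# not a statement about NID-GPT's measured performance.
import time
import torch
from sentence_transformers import SentenceTransformer

texts = ["sample passage"] * 256
devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
for device in devices:
    model = SentenceTransformer("all-MiniLM-L6-v2", device=device)
    start = time.perf_counter()
    model.encode(texts, batch_size=32)
    print(f"{device}: {time.perf_counter() - start:.2f}s for {len(texts)} texts")
```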
At the moment only a limited set of document sources from NID is being added to the NID-GPT vector store; once the module matures, the functionality will be extended to the larger dataset available in the NID and Austria-Forum repositories.
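An incremental ingestion flow of this shape could look like the sketch below. Chroma is used here only as an example of a local vector store, and the fixed-size chunking scheme is an assumption; the store and splitter NID-GPT actually uses are not specified in this document.

```python
# Incremental ingestion sketch -- Chroma as an example local vector store.
import chromadb

client = chromadb.Client()  # in-memory; chromadb.PersistentClient keeps it on disk
collection = client.get_or_create_collection("nid_documents")

def ingest(doc_id: str, text: str, chunk_size: int = 500) -> None:
    """Split a document into fixed-size chunks and add them to the store."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    collection.add(
        documents=chunks,
        ids=[f"{doc_id}-{n}" for n in range(len(chunks))],
    )

ingest("nid-overview", "Full text of an NID document would go here.")
```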
Searching
1. To ask a question, type it into the search bar, for example: "What is NID Library System?"
2. Hit Enter on your keyboard or click "Ask!".
3. Wait (please be patient!) while the LLM consumes the prompt and prepares the answer. Currently, processing is offloaded to the local CPU and a modest onboard GPU; adding a more powerful GPU cluster in the future will improve the system's response time and capabilities.
4. Once done, the system prints the answer and the four sources it used as context from the indexed documents. You can then ask another question without reloading the page; just wait for the prompt again. (A sketch of this question/answer loop follows the list.)
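Behind the scenes, the question/answer loop typically has the following shape. This sketch continues the hypothetical Chroma example above; local_llm() again stands in for whichever locally hosted model NID-GPT actually uses.

```python
# Interactive question loop sketch -- collection comes from the ingestion
# example above; local_llm() is a hypothetical stand-in for the local model.
import chromadb

collection = chromadb.Client().get_or_create_collection("nid_documents")

def local_llm(prompt: str) -> str:
    raise NotImplementedError("call the locally hosted LLM here")

while True:
    query = input("Ask! > ").strip()
    if not query:
        break
    hits = collection.query(query_texts=[query], n_results=4)  # 4 sources
    context = "\n\n".join(hits["documents"][0])
    print(local_llm(f"Context:\n{context}\n\nQuestion: {query}"))
    for source_id in hits["ids"][0]:  # cite the chunks used as context
        print("source:", source_id)
```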