
A New Era Begins for GenAI-Based Assistants with Jetlink Genius

Updated: Dec 11


At Jetlink, we developed the Jetlink Genius platform to bring the power of advanced Generative AI (GenAI) technologies to the corporate world. Jetlink Genius integrates large language models (LLMs) and advanced natural language processing (NLP) methods to offer scalable and reliable virtual assistant solutions tailored to corporate needs. In this article, we will discuss the capabilities of Jetlink Genius, its system components, security and data management methods, LLM orchestration structure, and solutions for open-source LLMs from a technical perspective.


What is GenAI?

Generative AI (GenAI) is a broad term for AI models that can generate text, images, audio, and other content. GenAI systems, particularly those built on the transformer architecture and large language models (LLMs), have evolved quickly, achieving complex content creation capabilities. The foundation of GenAI was laid by the transformer architecture introduced in the 2017 Google Brain paper "Attention Is All You Need". The transformer architecture has significantly enhanced language models’ ability to understand text, learn context, and produce original content using attention mechanisms.

Today, there are numerous models in the GenAI field. Some of these models are cloud-based, while others are developed as open-source. Popular cloud-based models include OpenAI's GPT-4, Google’s PaLM 2, Anthropic's Claude, and Cohere's Command, while Meta’s LLaMA series stands out in the open-source ecosystem. Each of these models is transformer-based, containing billions of parameters and specializing in language understanding and generation. Jetlink Genius integrates the power of these models into corporate solutions, opening a new era for virtual assistants.


What is Jetlink Genius?

Jetlink Genius is a highly scalable, modular GenAI platform developed by Jetlink. This platform allows for adapting language models and GenAI algorithms to meet corporate needs. Jetlink Genius is integrated into Jetlink’s virtual assistant infrastructure and can generate responses using various LLM models (both cloud-based and open-source). Through this integration, companies can select the most suitable large language model and create their knowledge bases to offer a customized AI experience.

A key component of Jetlink Genius is its flexibility in model selection. Corporate firms can choose between cloud-based LLM models and open-source models that can operate in a fully isolated environment for security reasons. The platform enables organizations to create their knowledge bases, allowing responses to be generated based on this information; this provides significant advantages in corporate data security and privacy. As a result, Jetlink Genius generates responses using only the company's data sources, ensuring secure and accurate outcomes.


Corporate Knowledge Bases and RAG Integration

One of Jetlink Genius' most powerful features is its support for Retrieval-Augmented Generation (RAG). RAG is a technique that allows a language model to retrieve data directly from a knowledge base while generating responses. With RAG technology, Jetlink Genius can create customized knowledge bases for companies, enabling virtual assistants to produce responses based on these sources. Information gathered from various data sources such as PDFs, Microsoft Office documents, websites, and APIs is integrated into the system as knowledge bases, making corporate information instantly usable by the LLM.
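As a rough sketch of the general RAG pattern (not Jetlink Genius's internal implementation), a retriever pulls the most relevant knowledge-base chunks and the LLM is instructed to answer only from them; `search_knowledge_base` and `call_llm` below are hypothetical placeholders:

```python
# Minimal sketch of the RAG pattern: retrieve relevant chunks, then ask the
# LLM to answer strictly from that context. Both helpers are placeholders.

def search_knowledge_base(query: str, top_k: int = 3) -> list[str]:
    # Placeholder retriever: a real system would query a vector database here.
    corpus = [
        "Refunds are issued within 14 days of purchase.",
        "Support is available on weekdays between 09:00 and 18:00.",
        "Shipping to the EU takes 3-5 business days.",
    ]
    return corpus[:top_k]

def call_llm(prompt: str) -> str:
    # Placeholder for a call to any cloud-based or open-source LLM.
    return "(model response generated from the supplied context)"

def answer_with_rag(question: str) -> str:
    context = "\n".join(search_knowledge_base(question))
    prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return call_llm(prompt)

print(answer_with_rag("What is the refund policy?"))
```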


Security and Data Privacy

Jetlink Genius was developed to meet high data security and privacy standards. Security layers such as encryption, data masking, and anonymization are actively used during data transmission. Jetlink Genius adheres to Jetlink's firewall policies, ensuring the protection of sensitive information during data flow. The platform filters sensitive data before generating each response, preventing it from being transmitted externally and ensuring compliance with data protection regulations such as GDPR and KVKK.


Jetlink Genius Components

The advanced infrastructure of Jetlink Genius comprises several modular components, each designed to make GenAI-based virtual assistants functional and efficient in the corporate world.


  1. LLM Model Selection Assistant

    The LLM model selection assistant is a critical component of the Jetlink Genius platform, designed to help users choose the most suitable LLM model based on their needs. For instance, if speed and cost are prioritized, models with fewer parameters are suggested, while larger models are recommended when accuracy and complex reasoning matter most. This assistant considers factors such as model size, user requirements, and query context to select the right model among both cloud-based and open-source options.
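A simplified, purely illustrative sketch of such routing logic; the model names, thresholds, and criteria below are examples rather than the platform's actual selection rules:

```python
# Illustrative routing rules for picking an LLM; the model names, thresholds,
# and criteria are examples only, not the platform's actual selection logic.

def select_model(query: str, prioritize_cost: bool, data_is_sensitive: bool) -> str:
    if data_is_sensitive:
        # Sensitive data stays on an isolated, self-hosted open-source model.
        return "self-hosted-llama-3.2"
    complex_query = len(query.split()) > 40 or "compare" in query.lower()
    if complex_query and not prioritize_cost:
        return "large-cloud-model"   # higher accuracy, higher latency and cost
    return "small-fast-model"        # cheaper and faster for routine queries

print(select_model("What are your opening hours?",
                   prioritize_cost=True, data_is_sensitive=False))
```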


  2. LLM Model Orchestrator

    The model orchestrator module in Jetlink Genius is designed to answer incoming questions using the most suitable LLM model. It can run multiple models in parallel and seamlessly switch between them based on specific scenarios. For example, larger models handle complex customer queries, while simple questions are handled by smaller models that can respond quickly. This ensures that each user request is handled as efficiently as possible.
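As an illustration of one way such parallel execution and switching can be organized (a sketch with simulated model calls, not the platform's actual orchestrator):

```python
# Sketch of running two candidate models in parallel and taking the first
# usable answer; ask_model is a stand-in for real LLM calls.
import asyncio

async def ask_model(name: str, question: str, delay: float) -> str:
    await asyncio.sleep(delay)               # simulate model latency
    return f"[{name}] answer to: {question}"

async def orchestrate(question: str) -> str:
    tasks = [
        asyncio.create_task(ask_model("small-fast-model", question, 0.2)),
        asyncio.create_task(ask_model("large-accurate-model", question, 1.0)),
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()                         # drop the slower candidate
    return done.pop().result()

print(asyncio.run(orchestrate("How do I reset my password?")))
```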


  3. Hallucination Detector

    A major issue with GenAI models is hallucination – where the model generates responses with irrelevant or incorrect information. Jetlink Genius addresses this problem with a hallucination detector module that analyzes responses for accuracy by filtering them through logic and factual checks. If illogical statements or unrealistic information are detected, the model is prompted to regenerate the response.
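One way such a check-and-regenerate loop can be structured is sketched below; `generate` and `check_response` are simplified stand-ins for the actual model call and factual/logic checks:

```python
# Sketch of a check-and-regenerate loop; generate() and check_response() are
# simplified stand-ins for the real model call and factual/logic checks.

def generate(question: str, attempt: int) -> str:
    return f"Refunds are issued within 14 days. (draft {attempt})"  # placeholder LLM call

def check_response(answer: str, knowledge_chunks: list[str]) -> bool:
    # Toy check: accept the answer only if it overlaps with retrieved facts.
    return any(chunk.lower() in answer.lower() for chunk in knowledge_chunks)

def answer_with_checks(question: str, knowledge_chunks: list[str],
                       max_attempts: int = 3) -> str:
    for attempt in range(1, max_attempts + 1):
        answer = generate(question, attempt)
        if check_response(answer, knowledge_chunks):
            return answer
    return "I could not verify an answer from the knowledge base."

print(answer_with_checks("What is the refund policy?",
                         ["refunds are issued within 14 days"]))
```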


  4. Question and Content Filter

    When sensitive information is detected in user queries, it is filtered and not sent to cloud-based models. This module is specially configured to protect personal data and ensure compliance with privacy regulations: sensitive information such as personal or financial data is processed with masking methods to ensure a secure flow.
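A toy sketch of this kind of masking step; the regular expressions and labels are illustrative only and far simpler than a production-grade filter:

```python
# Toy masking pass (the patterns are illustrative, not the platform's rules):
# e-mail addresses and card-like digit sequences are replaced before the
# query is forwarded to a cloud-based model.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_sensitive(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_sensitive("My card 4111 1111 1111 1111 is linked to jane@acme.com"))
# -> "My card [CARD] is linked to [EMAIL]"
```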


  5. Open-Source LLM and Fine-Tuning

    Jetlink Genius also supports open-source LLM models to meet corporate security requirements. This allows language models to operate in a fully isolated environment inside a company’s data center. Meta’s LLaMA 3.2, for example, can be used effectively within Jetlink Genius. These models can be fine-tuned to fit a company's terminology and needs, creating cost-effective solutions with smaller models that can run on CPUs. This fine-tuning process enables the model to produce responses that reflect the language and specific needs of the company.
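As a rough illustration of what such fine-tuning can look like, the sketch below uses the Hugging Face transformers, peft, and datasets libraries for a LoRA-style adaptation; the model name, training data, and hyperparameters are placeholders, not Jetlink's actual setup:

```python
# Minimal LoRA fine-tuning sketch (placeholder model, data, and settings).
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

model_name = "meta-llama/Llama-3.2-1B"   # placeholder; gated model, requires access
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with small trainable LoRA adapters.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Tiny illustrative dataset written in the company's own wording.
examples = [{"text": "Q: What is the refund policy?\nA: Refunds are issued within 14 days."}]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
    out["labels"] = [ids.copy() for ids in out["input_ids"]]
    return out

dataset = Dataset.from_list(examples).map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="genius-lora", num_train_epochs=1,
                           per_device_train_batch_size=1, report_to="none"),
    train_dataset=dataset,
)
trainer.train()
model.save_pretrained("genius-lora")     # only the small LoRA adapter is stored
```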


  6. Jetlink Genius RAG (Retrieval-Augmented Generation)

    Retrieval-Augmented Generation (RAG) is a powerful technology infrastructure within the Jetlink Genius platform that enables the generation of highly accurate responses based on corporate knowledge bases. RAG improves accuracy in AI systems that generate content from large data sets by integrating language models with corporate knowledge banks, combining vector-based information retrieval with a fast query response system.


RAG and Vector Database Usage

To support its RAG infrastructure, Jetlink Genius uses a vector database. Unlike traditional key-value stores or relational databases, vector databases store data as high-dimensional vectors. Information from various sources, such as PDF documents, web pages, API outputs, or office documents, is vectorized and transformed into a vector field. Each data piece is represented as contextually meaningful vectors indexed in the vector database.

This allows Jetlink Genius to perform a rapid preliminary query in the vector database before routing a query to the language model. By using semantic proximity calculations, the query is analyzed in the vector database, and the most relevant information pieces are retrieved. This is particularly effective for improving query efficiency and accuracy in large data sets. Vector databases are optimized to quickly resolve similarity-based queries, unlike traditional database searches. Jetlink Genius performs high-speed nearest neighbor searches using vector search algorithms like HNSW (Hierarchical Navigable Small World).
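A minimal sketch of how an HNSW index can be built and queried, here using the open-source hnswlib library as an assumed stand-in; the random vectors take the place of real document embeddings:

```python
# Approximate nearest-neighbor search with an HNSW index (hnswlib assumed);
# random vectors stand in for real document embeddings.
import hnswlib
import numpy as np

dim, num_docs = 384, 10_000
doc_vectors = np.random.rand(num_docs, dim).astype(np.float32)

index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=num_docs, ef_construction=200, M=16)
index.add_items(doc_vectors, np.arange(num_docs))
index.set_ef(50)                        # search-time accuracy/speed trade-off

query_vector = np.random.rand(1, dim).astype(np.float32)
labels, distances = index.knn_query(query_vector, k=5)
print(labels[0], distances[0])          # ids and cosine distances of the top-5 neighbors
```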


Vectorized Knowledge Bases

Jetlink Genius integrates RAG with vectorized knowledge bases created specifically for the company. The vectorization process represents each document or content piece in the knowledge base as high-dimensional vectors in a format that the language model can understand. These vector representations group information based on semantic structures learned by the language model, allowing rapid query access even in knowledge bases with extensive data.
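As an illustration of this vectorization step, documents can be embedded with an off-the-shelf sentence-embedding model; the sentence-transformers library and model name below are assumptions, not necessarily what Jetlink Genius uses:

```python
# Embedding knowledge-base chunks into dense vectors; the library and model
# name are illustrative choices, not the platform's actual embedding stack.
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # 384-dimensional vectors

chunks = [
    "Refunds are issued within 14 days of purchase.",
    "Support is available on weekdays between 09:00 and 18:00.",
]
vectors = embedder.encode(chunks, normalize_embeddings=True)
print(vectors.shape)    # (2, 384) -> one vector per knowledge-base chunk
```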

Jetlink Genius’s vector database can store millions of information pieces, instantly calculating similarity scores for each query to present the most relevant content to the language model. The language model selects the highest-scoring vectors (most relevant information pieces) and processes them in a priority order. As a result, the model produces responses using only the most pertinent data from the knowledge base, achieving superior performance in both speed and accuracy.


Query Answering with Semantic Similarity and Vector Representations

Jetlink Genius converts each user query into a vector representation comprehensible to the language model and matches this query with vectors in the knowledge base using semantic similarity calculations. Using similarity metrics such as cosine similarity and Euclidean distance, it identifies the nearest neighbors between the query vector and the knowledge base vectors, quickly retrieves the information pieces most relevant to the query, and incorporates them into the response generation process.
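For reference, cosine similarity between a query vector and a set of knowledge-base vectors can be computed as sketched below with NumPy (toy three-dimensional vectors, purely illustrative):

```python
# Cosine similarity between a query vector and knowledge-base vectors,
# written out with NumPy (toy vectors, purely illustrative).
import numpy as np

def cosine_similarity(query: np.ndarray, docs: np.ndarray) -> np.ndarray:
    return (docs @ query) / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))

query = np.array([0.2, 0.9, 0.1])
knowledge_base = np.array([
    [0.1, 0.8, 0.2],    # product description
    [0.9, 0.1, 0.0],    # unrelated document
])
scores = cosine_similarity(query, knowledge_base)
best = int(np.argmax(scores))
print(scores, best)     # the first document is the nearest neighbor
```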

For example, when a customer asks a Jetlink Genius-powered virtual assistant about a specific product, the query is matched with similar product descriptions, usage instructions, or support documents within the knowledge base. The RAG system uses nearest neighbor algorithms to select the most suitable vectors for the query, and the language model generates a meaningful response based on this data. This approach results in responses with high accuracy and up-to-date information.


Advantages of Vector Databases and RAG in Jetlink Genius

  • High Performance and Quick Response: Vector databases optimize query performance with vector-based algorithms (such as HNSW), enabling Jetlink Genius to provide fast response times even in corporate environments with large-scale knowledge bases.


  • Scalability with Big Data: The vector-based infrastructure can accommodate millions of data points and process them in real time, making it ideal for companies with extensive knowledge bases.


  • High Accuracy and Relevance: Semantic similarity calculations select the most appropriate information pieces for each query, providing more accurate responses. The RAG structure provides meaningful and up-to-date data to the language model.


  • Flexible Data Integration: Jetlink Genius can integrate information from various data formats, including PDFs, DOCs, web pages, and API outputs, transforming corporate data sources into a comprehensive knowledge base.

In summary, the RAG and vector database infrastructure of Jetlink Genius enables the language model to generate contextually appropriate, accurate, and up-to-date responses. This technology enhances user experience by providing quick and effective access to information.


  7. Multi-Source Data Retention

    The multi-source data retention feature of Jetlink Genius enables the consolidation of different data sources into a single knowledge base. Data from various formats, such as PDF, Word, Excel, websites, and APIs, can be integrated and structured into knowledge banks. This feature allows organizations to quickly build knowledge banks using their own data, providing a rich source of information for language models.
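A simplified sketch of consolidating mixed sources into one chunked knowledge base; the pypdf library, file names, and fixed-size chunking are assumptions for illustration:

```python
# Pulling text from mixed sources into one chunked knowledge base.
# pypdf is assumed for PDF extraction; file names and chunk size are examples.
from pypdf import PdfReader

def load_pdf(path: str) -> str:
    return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)

def load_txt(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

def chunk(text: str, size: int = 500) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

knowledge_base = []
for path in ["faq.pdf", "policies.txt"]:          # placeholder file names
    text = load_pdf(path) if path.endswith(".pdf") else load_txt(path)
    knowledge_base.extend(chunk(text))
```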


  8. Security Layers

    Jetlink Genius includes multiple security layers to fully meet corporate security requirements. Encryption protocols applied throughout the data flow keep user data protected. The platform operates in compliance with Jetlink's firewall policies, applying the highest security measures to safeguard sensitive data and adhere to regulations like GDPR. Additionally, Jetlink Genius’s data masking and anonymization features ensure the protection of sensitive information within the system.


In-Depth Look at LLM Orchestration

Jetlink Genius's LLM (Large Language Model) orchestration infrastructure optimizes the process of selecting the most suitable model for each incoming query. The orchestration system dynamically identifies and activates the ideal model among the various LLMs available on the platform based on the complexity and accuracy requirements of user queries. This allows seamless transitions between smaller models for simpler queries and larger models for more complex, context-rich responses. The model selection assistant evaluates each query's context and determines the most appropriate model, considering factors like processing time, accuracy, and data sensitivity, ensuring optimal system performance.

This orchestration process is also essential in handling model transitions. For example, Jetlink Genius can sequentially activate multiple models within the same query and, if necessary, switch between them. This approach is especially useful when answering multi-stage questions, as it enables a different model to be selected for each stage's context. Furthermore, integrated with a hallucination detector, this system evaluates each model's response for accuracy and, if necessary, prompts the model to generate a new response. The multi-layered, dynamic LLM orchestration infrastructure in Jetlink Genius increases both response accuracy and system scalability while optimizing efficiency and response times, resulting in an enhanced user experience. This structure makes Jetlink Genius a high-performance, adaptable AI solution for corporate environments.


The World of Open-Source LLMs

Jetlink Genius provides crucial advantages, such as flexibility, security, and cost-efficiency, by integrating open-source LLM models into corporate solutions. Open-source LLM models are ideal for companies that prioritize corporate data privacy and compliance with local regulations. Jetlink Genius works effectively with powerful open-source LLM models like Meta's LLaMA 3.2, allowing language models to be fine-tuned with a company's specific data sets. This allows companies to develop a model that produces customized results reflecting their unique terminology and context. The fine-tuning process ensures the model learns only from targeted information, increasing the accuracy and specificity of responses.


One of the advantages of open-source LLM models is that Jetlink Genius can be configured according to the available processing power. For example, some open-source models like LLaMA can be run on CPUs in their smaller-parameter versions, eliminating the need for GPUs and reducing costs. Jetlink Genius allows easy transitions between models of different sizes, enabling optimization in terms of both cost and speed. When used with the RAG (Retrieval-Augmented Generation) infrastructure, open-source models become highly effective, generating targeted responses based solely on corporate knowledge base data. For example, responses can be shaped using internal policy documents, customer support documents, or technical documentation, increasing model accuracy while reducing hallucination risks.
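As an example of the CPU-only setup mentioned above, a small quantized model can be served through the open-source llama-cpp-python bindings (an assumed tool here; the GGUF file path and parameters are placeholders):

```python
# CPU-only inference with a small quantized model via llama-cpp-python
# (assumed bindings; the GGUF file path and settings are placeholders).
from llama_cpp import Llama

llm = Llama(model_path="llama-3.2-1b-instruct-q4.gguf", n_ctx=2048, n_threads=4)

prompt = "Summarize our refund policy for a customer in two sentences."
result = llm(prompt, max_tokens=128, temperature=0.2)
print(result["choices"][0]["text"])
```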

Another advantage of open-source models is that they can operate entirely in a secure environment without relying on external cloud systems, maintaining data privacy and security at the highest level. Jetlink Genius operates open-source LLM models in an isolated infrastructure, ensuring data privacy and GDPR compliance. This structure makes Jetlink Genius a cost-effective and highly secure solution for corporate environments.


Conclusion

Jetlink Genius stands out by providing a robust, flexible, and reliable infrastructure that meets the needs of GenAI-based assistants in the corporate world. The platform is optimized to meet every corporate requirement, with advanced LLM orchestration, RAG-supported vector database infrastructure, and a wide range of open-source LLM integration options. Thanks to Jetlink Genius's model selection assistant and dynamic model transitions, each user query is matched with the most suitable model, delivering high accuracy rates and fast response times. Jetlink’s powerful orchestration structure, combined with innovative components like the hallucination detector, provides a high-accuracy and reliable AI solution that continuously improves the user experience.

One of Jetlink Genius’s most important advantages is its complete support for corporate data security and compliance requirements. By enabling the use of open-source LLM models, Jetlink Genius allows organizations to operate in closed and secure environments within their data centers. Fully compliant with data privacy and security regulations like GDPR, Jetlink Genius can be used with confidence in sectors that require high data security, such as banking, finance, and healthcare. With the support of RAG and vector database integration, companies can limit language model responses to verified corporate information, providing accurate and reliable responses while achieving high accuracy rates in corporate information management.


In summary, Jetlink Genius differentiates itself by offering innovative features that adapt large language models to the corporate world. Companies can leverage Jetlink Genius to develop virtual assistants that make a difference in customer service, technical support solutions, and content management. Thanks to advanced LLM management and security layers, Jetlink Genius combines high performance and compliance, creating the ideal environment for AI solutions. To explore Jetlink Genius and try out this new technology in your organization, contact us at hello@jetlink.io and take advantage of the benefits this innovative GenAI infrastructure offers.
