chore: format project

parent 79d4d87f24
commit cd7722afdb

31 changed files with 504 additions and 409 deletions

CONTRIBUTING.md
@@ -1,13 +1,17 @@
 # How to Contribute to Perplexica

-Hey there, thanks for deciding to contribute to Perplexica. Anything you help with will support the development of Perplexica and will make it better. Let's walk you through the key aspects to ensure your contributions are effective and in harmony with the project's setup.
+Hey there, thanks for deciding to contribute to Perplexica. Anything you help with will support the development of
+Perplexica and will make it better. Let's walk you through the key aspects to ensure your contributions are effective
+and in harmony with the project's setup.

 ## Project Structure

 Perplexica's design consists of two main domains:

-- **Frontend (`ui` directory)**: This is a Next.js application holding all user interface components. It's a self-contained environment that manages everything the user interacts with.
-- **Backend (root and `src` directory)**: The backend logic is situated in the `src` folder, but the root directory holds the main `package.json` for backend dependency management.
+- **Frontend (`ui` directory)**: This is a Next.js application holding all user interface components. It's a
+  self-contained environment that manages everything the user interacts with.
+- **Backend (root and `src` directory)**: The backend logic is situated in the `src` folder, but the root directory
+  holds the main `package.json` for backend dependency management.

 ## Setting Up Your Environment
@@ -22,18 +26,23 @@ Before diving into coding, setting up your local environment is key. Here's what
 ### Frontend

-1. Navigate to the `ui` folder and repeat the process of renaming `.env.example` to `.env`, making sure to provide the frontend-specific variables.
+1. Navigate to the `ui` folder and repeat the process of renaming `.env.example` to `.env`, making sure to provide the
+   frontend-specific variables.
 2. Execute `npm install` within the `ui` directory to get the frontend dependencies ready.
 3. Launch the frontend development server with `npm run dev`.

-**Please note**: Docker configurations are present for setting up production environments, whereas `npm run dev` is used for development purposes.
+**Please note**: Docker configurations are present for setting up production environments, whereas `npm run dev` is used
+for development purposes.

 ## Coding and Contribution Practices

 Before committing changes:

 1. Ensure that your code functions correctly by thorough testing.
-2. Always run `npm run format:write` to format your code according to the project's coding standards. This helps maintain consistency and code quality.
-3. We currently do not have a code of conduct, but it is in the works. In the meantime, please be mindful of how you engage with the project and its community.
+2. Always run `npm run format:write` to format your code according to the project's coding standards. This helps
+   maintain consistency and code quality.
+3. We currently do not have a code of conduct, but it is in the works. In the meantime, please be mindful of how you
+   engage with the project and its community.

-Following these steps will help maintain the integrity of Perplexica's codebase and facilitate a smoother integration of your valuable contributions. Thank you for your support and commitment to improving Perplexica.
+Following these steps will help maintain the integrity of Perplexica's codebase and facilitate a smoother integration of
+your valuable contributions. Thank you for your support and commitment to improving Perplexica.
|
101
README.md
101
README.md
|
@@ -8,24 +8,29 @@
 - [Preview](#preview)
 - [Features](#features)
 - [Installation](#installation)
-  - [Getting Started with Docker (Recommended)](#getting-started-with-docker-recommended)
-  - [Non-Docker Installation](#non-docker-installation)
-  - [Ollama connection errors](#ollama-connection-errors)
+  - [Getting Started with Docker (Recommended)](#getting-started-with-docker-recommended)
+  - [Non-Docker Installation](#non-docker-installation)
+  - [Ollama connection errors](#ollama-connection-errors)
 - [Using as a Search Engine](#using-as-a-search-engine)
 - [One-Click Deployment](#one-click-deployment)
 - [Upcoming Features](#upcoming-features)
 - [Support Us](#support-us)
-  - [Donations](#donations)
+  - [Donations](#donations)
 - [Contribution](#contribution)
 - [Help and Support](#help-and-support)

 ## Overview

-Perplexica is an open-source AI-powered searching tool or an AI-powered search engine that goes deep into the internet to find answers. Inspired by Perplexity AI, it's an open-source option that not just searches the web but understands your questions. It uses advanced machine learning algorithms like similarity searching and embeddings to refine results and provides clear answers with sources cited.
+Perplexica is an open-source AI-powered searching tool or an AI-powered search engine that goes deep into the internet
+to find answers. Inspired by Perplexity AI, it's an open-source option that not just searches the web but understands
+your questions. It uses advanced machine learning algorithms like similarity searching and embeddings to refine results
+and provides clear answers with sources cited.

-Using SearxNG to stay current and fully open source, Perplexica ensures you always get the most up-to-date information without compromising your privacy.
+Using SearxNG to stay current and fully open source, Perplexica ensures you always get the most up-to-date information
+without compromising your privacy.

-Want to know more about its architecture and how it works? You can read it [here](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/README.md).
+Want to know more about its architecture and how it works? You can read
+it [here](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/README.md).

 ## Preview
@@ -35,18 +40,24 @@ Want to know more about its architecture and how it works? You can read it [here
 - **Local LLMs**: You can make use local LLMs such as Llama3 and Mixtral using Ollama.
 - **Two Main Modes:**
-  - **Copilot Mode:** (In development) Boosts search by generating different queries to find more relevant internet sources. Like normal search instead of just using the context by SearxNG, it visits the top matches and tries to find relevant sources to the user's query directly from the page.
-  - **Normal Mode:** Processes your query and performs a web search.
+  - **Copilot Mode:** (In development) Boosts search by generating different queries to find more relevant internet
+    sources. Like normal search instead of just using the context by SearxNG, it visits the top matches and tries to
+    find relevant sources to the user's query directly from the page.
+  - **Normal Mode:** Processes your query and performs a web search.
 - **Focus Modes:** Special modes to better answer specific types of questions. Perplexica currently has 6 focus modes:
-  - **All Mode:** Searches the entire web to find the best results.
-  - **Writing Assistant Mode:** Helpful for writing tasks that does not require searching the web.
-  - **Academic Search Mode:** Finds articles and papers, ideal for academic research.
-  - **YouTube Search Mode:** Finds YouTube videos based on the search query.
-  - **Wolfram Alpha Search Mode:** Answers queries that need calculations or data analysis using Wolfram Alpha.
-  - **Reddit Search Mode:** Searches Reddit for discussions and opinions related to the query.
-- **Current Information:** Some search tools might give you outdated info because they use data from crawling bots and convert them into embeddings and store them in a index. Unlike them, Perplexica uses SearxNG, a metasearch engine to get the results and rerank and get the most relevant source out of it, ensuring you always get the latest information without the overhead of daily data updates.
+  - **All Mode:** Searches the entire web to find the best results.
+  - **Writing Assistant Mode:** Helpful for writing tasks that does not require searching the web.
+  - **Academic Search Mode:** Finds articles and papers, ideal for academic research.
+  - **YouTube Search Mode:** Finds YouTube videos based on the search query.
+  - **Wolfram Alpha Search Mode:** Answers queries that need calculations or data analysis using Wolfram Alpha.
+  - **Reddit Search Mode:** Searches Reddit for discussions and opinions related to the query.
+- **Current Information:** Some search tools might give you outdated info because they use data from crawling bots and
+  convert them into embeddings and store them in a index. Unlike them, Perplexica uses SearxNG, a metasearch engine to
+  get the results and rerank and get the most relevant source out of it, ensuring you always get the latest information
+  without the overhead of daily data updates.

-It has many more features like image and video search. Some of the planned features are mentioned in [upcoming features](#upcoming-features).
+It has many more features like image and video search. Some of the planned features are mentioned
+in [upcoming features](#upcoming-features).

 ## Installation
@@ -65,13 +76,16 @@ There are mainly 2 ways of installing Perplexica - With Docker, Without Docker.
 4. Rename the `sample.config.toml` file to `config.toml`. For Docker setups, you need only fill in the following fields:

-   - `OPENAI`: Your OpenAI API key. **You only need to fill this if you wish to use OpenAI's models**.
-   - `OLLAMA`: Your Ollama API URL. You should enter it as `http://host.docker.internal:PORT_NUMBER`. If you installed Ollama on port 11434, use `http://host.docker.internal:11434`. For other ports, adjust accordingly. **You need to fill this if you wish to use Ollama's models instead of OpenAI's**.
-   - `GROQ`: Your Groq API key. **You only need to fill this if you wish to use Groq's hosted models**
+   - `OPENAI`: Your OpenAI API key. **You only need to fill this if you wish to use OpenAI's models**.
+   - `OLLAMA`: Your Ollama API URL. You should enter it as `http://host.docker.internal:PORT_NUMBER`. If you installed
+     Ollama on port 11434, use `http://host.docker.internal:11434`. For other ports, adjust accordingly. **You need to
+     fill this if you wish to use Ollama's models instead of OpenAI's**.
+   - `GROQ`: Your Groq API key. **You only need to fill this if you wish to use Groq's hosted models**

-     **Note**: You can change these after starting Perplexica from the settings dialog.
+     **Note**: You can change these after starting Perplexica from the settings dialog.

-   - `SIMILARITY_MEASURE`: The similarity measure to use (This is filled by default; you can leave it as is if you are unsure about it.)
+   - `SIMILARITY_MEASURE`: The similarity measure to use (This is filled by default; you can leave it as is if you are
+     unsure about it.)

 5. Ensure you are in the directory containing the `docker-compose.yaml` file and execute:
@@ -81,23 +95,29 @@ There are mainly 2 ways of installing Perplexica - With Docker, Without Docker.
 6. Wait a few minutes for the setup to complete. You can access Perplexica at http://localhost:3000 in your web browser.

-**Note**: After the containers are built, you can start Perplexica directly from Docker without having to open a terminal.
+**Note**: After the containers are built, you can start Perplexica directly from Docker without having to open a
+terminal.

 ### Non-Docker Installation

-1. Clone the repository and rename the `sample.config.toml` file to `config.toml` in the root directory. Ensure you complete all required fields in this file.
+1. Clone the repository and rename the `sample.config.toml` file to `config.toml` in the root directory. Ensure you
+   complete all required fields in this file.
 2. Rename the `.env.example` file to `.env` in the `ui` folder and fill in all necessary fields.
 3. After populating the configuration and environment files, run `npm i` in both the `ui` folder and the root directory.
 4. Install the dependencies and then execute `npm run build` in both the `ui` folder and the root directory.
-5. Finally, start both the frontend and the backend by running `npm run start` in both the `ui` folder and the root directory.
+5. Finally, start both the frontend and the backend by running `npm run start` in both the `ui` folder and the root
+   directory.

-**Note**: Using Docker is recommended as it simplifies the setup process, especially for managing environment variables and dependencies.
+**Note**: Using Docker is recommended as it simplifies the setup process, especially for managing environment variables
+and dependencies.

-See the [installation documentation](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/installation) for more information like exposing it your network, etc.
+See the [installation documentation](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/installation) for more
+information like exposing it your network, etc.

 ### Ollama connection errors

-If you're facing an Ollama connection error, it is often related to the backend not being able to connect to Ollama's API. How can you fix it? You can fix it by updating your Ollama API URL in the settings menu to the following:
+If you're facing an Ollama connection error, it is often related to the backend not being able to connect to Ollama's
+API. How can you fix it? You can fix it by updating your Ollama API URL in the settings menu to the following:

 On Windows: `http://host.docker.internal:11434`<br>
 On Mac: `http://host.docker.internal:11434`<br>
@@ -107,11 +127,13 @@ You need to edit the ports accordingly.
 ## Using as a Search Engine

-If you wish to use Perplexica as an alternative to traditional search engines like Google or Bing, or if you want to add a shortcut for quick access from your browser's search bar, follow these steps:
+If you wish to use Perplexica as an alternative to traditional search engines like Google or Bing, or if you want to add
+a shortcut for quick access from your browser's search bar, follow these steps:

 1. Open your browser's settings.
 2. Navigate to the 'Search Engines' section.
-3. Add a new site search with the following URL: `http://localhost:3000/?q=%s`. Replace `localhost` with your IP address or domain name, and `3000` with the port number if Perplexica is not hosted locally.
+3. Add a new site search with the following URL: `http://localhost:3000/?q=%s`. Replace `localhost` with your IP address
+   or domain name, and `3000` with the port number if Perplexica is not hosted locally.
 4. Click the add button. Now, you can use Perplexica directly from your browser's search bar.

 ## One-Click Deployment
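The `%s` in step 3 of the hunk above is the standard browser query-template placeholder: the browser swaps in your URL-encoded search terms. A small sketch of that substitution, assuming a local instance on port 3000 (the function name is hypothetical):

```python
from urllib.parse import quote_plus

def perplexica_search_url(query: str, base: str = "http://localhost:3000") -> str:
    # The browser replaces %s with the URL-encoded query; we mimic that here.
    return (base + "/?q=%s").replace("%s", quote_plus(query))

print(perplexica_search_url("how does an AC work"))
# → http://localhost:3000/?q=how+does+an+AC+work
```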
@@ -128,11 +150,13 @@ If you wish to use Perplexica as an alternative to traditional search engines li
 ## Support Us

-If you find Perplexica useful, consider giving us a star on GitHub. This helps more people discover Perplexica and supports the development of new features. Your support is greatly appreciated.
+If you find Perplexica useful, consider giving us a star on GitHub. This helps more people discover Perplexica and
+supports the development of new features. Your support is greatly appreciated.

 ### Donations

-We also accept donations to help sustain our project. If you would like to contribute, you can use the following button to make a donation in cryptocurrency. Thank you for your support!
+We also accept donations to help sustain our project. If you would like to contribute, you can use the following button
+to make a donation in cryptocurrency. Thank you for your support!

 <a href="https://nowpayments.io/donation?api_key=RFFKJH1-GRR4DQG-HFV1DZP-00G6MMK&source=lk_donation&medium=referral" target="_blank">
 <img src="https://nowpayments.io/images/embeds/donation-button-white.svg" alt="Crypto donation button by NOWPayments">
@@ -140,10 +164,17 @@ We also accept donations to help sustain our project. If you would like to contr
 ## Contribution

-Perplexica is built on the idea that AI and large language models should be easy for everyone to use. If you find bugs or have ideas, please share them in via GitHub Issues. For more information on contributing to Perplexica you can read the [CONTRIBUTING.md](CONTRIBUTING.md) file to learn more about Perplexica and how you can contribute to it.
+Perplexica is built on the idea that AI and large language models should be easy for everyone to use. If you find bugs
+or have ideas, please share them in via GitHub Issues. For more information on contributing to Perplexica you can read
+the [CONTRIBUTING.md](CONTRIBUTING.md) file to learn more about Perplexica and how you can contribute to it.

 ## Help and Support

-If you have any questions or feedback, please feel free to reach out to us. You can create an issue on GitHub or join our Discord server. There, you can connect with other users, share your experiences and reviews, and receive more personalized help. [Click here](https://discord.gg/EFwsmQDgAu) to join the Discord server. To discuss matters outside of regular support, feel free to contact me on Discord at `itzcrazykns`.
+If you have any questions or feedback, please feel free to reach out to us. You can create an issue on GitHub or join
+our Discord server. There, you can connect with other users, share your experiences and reviews, and receive more
+personalized help. [Click here](https://discord.gg/EFwsmQDgAu) to join the Discord server. To discuss matters outside of
+regular support, feel free to contact me on Discord at `itzcrazykns`.

-Thank you for exploring Perplexica, the AI-powered search engine designed to enhance your search experience. We are constantly working to improve Perplexica and expand its capabilities. We value your feedback and contributions which help us make Perplexica even better. Don't forget to check back for updates and new features!
+Thank you for exploring Perplexica, the AI-powered search engine designed to enhance your search experience. We are
+constantly working to improve Perplexica and expand its capabilities. We value your feedback and contributions which
+help us make Perplexica even better. Don't forget to check back for updates and new features!
docs/architecture/README.md

@@ -2,10 +2,15 @@
 Perplexica's architecture consists of the following key components:

-1. **User Interface**: A web-based interface that allows users to interact with Perplexica for searching images, videos, and much more.
-2. **Agent/Chains**: These components predict Perplexica's next actions, understand user queries, and decide whether a web search is necessary.
+1. **User Interface**: A web-based interface that allows users to interact with Perplexica for searching images, videos,
+   and much more.
+2. **Agent/Chains**: These components predict Perplexica's next actions, understand user queries, and decide whether a
+   web search is necessary.
 3. **SearXNG**: A metadata search engine used by Perplexica to search the web for sources.
-4. **LLMs (Large Language Models)**: Utilized by agents and chains for tasks like understanding content, writing responses, and citing sources. Examples include Claude, GPTs, etc.
-5. **Embedding Models**: To improve the accuracy of search results, embedding models re-rank the results using similarity search algorithms such as cosine similarity and dot product distance.
+4. **LLMs (Large Language Models)**: Utilized by agents and chains for tasks like understanding content, writing
+   responses, and citing sources. Examples include Claude, GPTs, etc.
+5. **Embedding Models**: To improve the accuracy of search results, embedding models re-rank the results using
+   similarity search algorithms such as cosine similarity and dot product distance.

-For a more detailed explanation of how these components work together, see [WORKING.md](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/WORKING.md).
+For a more detailed explanation of how these components work together,
+see [WORKING.md](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/WORKING.md).
docs/architecture/WORKING.md

@@ -1,19 +1,31 @@
 ## How does Perplexica work?

-Curious about how Perplexica works? Don't worry, we'll cover it here. Before we begin, make sure you've read about the architecture of Perplexica to ensure you understand what it's made up of. Haven't read it? You can read it [here](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/README.md).
+Curious about how Perplexica works? Don't worry, we'll cover it here. Before we begin, make sure you've read about the
+architecture of Perplexica to ensure you understand what it's made up of. Haven't read it? You can read
+it [here](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/README.md).

-We'll understand how Perplexica works by taking an example of a scenario where a user asks: "How does an A.C. work?". We'll break down the process into steps to make it easier to understand. The steps are as follows:
+We'll understand how Perplexica works by taking an example of a scenario where a user asks: "How does an A.C. work?".
+We'll break down the process into steps to make it easier to understand. The steps are as follows:

-1. The message is sent via WS to the backend server where it invokes the chain. The chain will depend on your focus mode. For this example, let's assume we use the "webSearch" focus mode.
-2. The chain is now invoked; first, the message is passed to another chain where it first predicts (using the chat history and the question) whether there is a need for sources and searching the web. If there is, it will generate a query (in accordance with the chat history) for searching the web that we'll take up later. If not, the chain will end there, and then the answer generator chain, also known as the response generator, will be started.
+1. The message is sent via WS to the backend server where it invokes the chain. The chain will depend on your focus
+   mode. For this example, let's assume we use the "webSearch" focus mode.
+2. The chain is now invoked; first, the message is passed to another chain where it first predicts (using the chat
+   history and the question) whether there is a need for sources and searching the web. If there is, it will generate a
+   query (in accordance with the chat history) for searching the web that we'll take up later. If not, the chain will
+   end there, and then the answer generator chain, also known as the response generator, will be started.
 3. The query returned by the first chain is passed to SearXNG to search the web for information.
-4. After the information is retrieved, it is based on keyword-based search. We then convert the information into embeddings and the query as well, then we perform a similarity search to find the most relevant sources to answer the query.
-5. After all this is done, the sources are passed to the response generator. This chain takes all the chat history, the query, and the sources. It generates a response that is streamed to the UI.
+4. After the information is retrieved, it is based on keyword-based search. We then convert the information into
+   embeddings and the query as well, then we perform a similarity search to find the most relevant sources to answer the
+   query.
+5. After all this is done, the sources are passed to the response generator. This chain takes all the chat history, the
+   query, and the sources. It generates a response that is streamed to the UI.

 ### How are the answers cited?

-The LLMs are prompted to do so. We've prompted them so well that they cite the answers themselves, and using some UI magic, we display it to the user.
+The LLMs are prompted to do so. We've prompted them so well that they cite the answers themselves, and using some UI
+magic, we display it to the user.

 ### Image and Video Search

-Image and video searches are conducted in a similar manner. A query is always generated first, then we search the web for images and videos that match the query. These results are then returned to the user.
+Image and video searches are conducted in a similar manner. A query is always generated first, then we search the web
+for images and videos that match the query. These results are then returned to the user.
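Step 4 in the hunk above (the cosine-similarity rerank also mentioned in the architecture doc) can be sketched as follows. The embedding vectors and source names are made up for illustration; real embeddings come from an embedding model and have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = (a · b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

# Toy embeddings for the query "How does an A.C. work?" and two retrieved sources.
query_vec = [0.9, 0.1, 0.3]
sources = {
    "how-ac-cooling-works": [0.8, 0.2, 0.4],
    "ac-repair-pricing": [0.1, 0.9, 0.2],
}

# Rerank: sort sources by similarity to the query, most relevant first.
ranked = sorted(sources, key=lambda name: cosine_similarity(query_vec, sources[name]), reverse=True)
print(ranked[0])  # → how-ac-cooling-works
```

The same loop with a dot product instead of cosine gives the "dot product distance" variant the architecture doc mentions; the only difference is skipping the normalization by vector lengths.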
@@ -1,6 +1,7 @@
 # Expose Perplexica to a network

-This guide will show you how to make Perplexica available over a network. Follow these steps to allow computers on the same network to interact with Perplexica. Choose the instructions that match the operating system you are using.
+This guide will show you how to make Perplexica available over a network. Follow these steps to allow computers on the
+same network to interact with Perplexica. Choose the instructions that match the operating system you are using.

 ## Windows
@@ -32,12 +32,12 @@ search:
   # Existing autocomplete backends: "dbpedia", "duckduckgo", "google", "yandex", "mwmbl",
   # "seznam", "startpage", "stract", "swisscows", "qwant", "wikipedia" - leave blank to turn it off
   # by default.
-  autocomplete: 'google'
+  autocomplete: 'duckduckgo'
   # minimun characters to type before autocompleter starts
-  autocomplete_min: 4
+  autocomplete_min: 3
   # Default search language - leave blank to detect from browser information or
   # use codes from 'languages.py'
-  default_lang: 'auto'
+  default_lang: 'en-US'
   # max_page: 0 # if engine supports paging, 0 means unlimited numbers of pages
   # Available languages
   # languages:
@@ -213,15 +213,15 @@ outgoing:
 # Comment or un-comment plugin to activate / deactivate by default.
 #
-# enabled_plugins:
-#   # these plugins are enabled if nothing is configured ..
-#   - 'Hash plugin'
-#   - 'Self Information'
-#   - 'Tracker URL remover'
-#   - 'Ahmia blacklist' # activation depends on outgoing.using_tor_proxy
-#   # these plugins are disabled if nothing is configured ..
-#   - 'Hostname replace' # see hostname_replace configuration below
-#   - 'Open Access DOI rewrite'
+enabled_plugins:
+  # # these plugins are enabled if nothing is configured ..
+  # - 'Hash plugin'
+  # - 'Self Information'
+  # - 'Tracker URL remover'
+  # - 'Ahmia blacklist' # activation depends on outgoing.using_tor_proxy
+  # # these plugins are disabled if nothing is configured ..
+  # - 'Hostname replace' # see hostname_replace configuration below
+  - 'Open Access DOI rewrite'
 # - 'Tor check plugin'
 # # Read the docs before activate: auto-detection of the language could be
 # # detrimental to users expectations / users can activate the plugin in the
@@ -265,17 +265,17 @@ checker:
         lang: en
       result_container:
         - not_empty
-        - ['one_title_contains', 'citizen kane']
+        - [ 'one_title_contains', 'citizen kane' ]
       test:
         - unique_results

     android: &test_android
       matrix:
-        query: ['android']
-        lang: ['en', 'de', 'fr', 'zh-CN']
+        query: [ 'android' ]
+        lang: [ 'en', 'de', 'fr', 'zh-CN' ]
       result_container:
         - not_empty
-        - ['one_title_contains', 'google']
+        - [ 'one_title_contains', 'google' ]
       test:
        - unique_results
@@ -284,7 +284,7 @@ checker:
     infobox: &tests_infobox
       infobox:
         matrix:
-          query: ['linux', 'new york', 'bbc']
+          query: [ 'linux', 'new york', 'bbc' ]
         result_container:
           - has_infobox
@@ -384,9 +384,9 @@ engines:
     engine: wikipedia
     shortcut: wp
     # add "list" to the array to get results in the results list
-    display_type: ['infobox']
+    display_type: [ 'infobox' ]
     base_url: 'https://{language}.wikipedia.org/'
-    categories: [general]
+    categories: [ general ]

   - name: bilibili
     engine: bilibili
@@ -417,7 +417,7 @@ engines:
     url_xpath: //article[@class="repo-summary"]//a[@class="repo-link"]/@href
     title_xpath: //article[@class="repo-summary"]//a[@class="repo-link"]
     content_xpath: //article[@class="repo-summary"]/p
-    categories: [it, repos]
+    categories: [ it, repos ]
     timeout: 4.0
     disabled: true
     shortcut: bb
@@ -593,7 +593,7 @@ engines:
   - name: docker hub
     engine: docker_hub
     shortcut: dh
-    categories: [it, packages]
+    categories: [ it, packages ]

   - name: erowid
     engine: xpath
@@ -604,7 +604,7 @@ engines:
     url_xpath: //dl[@class="results-list"]/dt[@class="result-title"]/a/@href
     title_xpath: //dl[@class="results-list"]/dt[@class="result-title"]/a/text()
     content_xpath: //dl[@class="results-list"]/dd[@class="result-details"]
-    categories: []
+    categories: [ ]
     shortcut: ew
     disabled: true
     about:
@@ -635,31 +635,32 @@ engines:
     timeout: 3.0
     weight: 2
     # add "list" to the array to get results in the results list
-    display_type: ['infobox']
+    display_type: [ 'infobox' ]
     tests: *tests_infobox
-    categories: [general]
+    categories: [ general ]

   - name: duckduckgo
     engine: duckduckgo
     shortcut: ddg
+    weight: 2.0

   - name: duckduckgo images
     engine: duckduckgo_extra
-    categories: [images, web]
+    categories: [ images, web ]
     ddg_category: images
     shortcut: ddi
     disabled: true

   - name: duckduckgo videos
     engine: duckduckgo_extra
-    categories: [videos, web]
+    categories: [ videos, web ]
     ddg_category: videos
     shortcut: ddv
     disabled: true

   - name: duckduckgo news
     engine: duckduckgo_extra
-    categories: [news, web]
+    categories: [ news, web ]
     ddg_category: news
     shortcut: ddn
     disabled: true
@@ -696,7 +697,7 @@ engines:
     content_xpath: //section[contains(@class, "word__defination")]
     first_page_num: 1
     shortcut: et
-    categories: [dictionaries]
+    categories: [ dictionaries ]
     about:
       website: https://www.etymonline.com/
       wikidata_id: Q1188617
@ -736,7 +737,7 @@ engines:
|
|||
- name: free software directory
|
||||
engine: mediawiki
|
||||
shortcut: fsd
|
||||
categories: [it, software wikis]
|
||||
categories: [ it, software wikis ]
|
||||
base_url: https://directory.fsf.org/
|
||||
search_type: title
|
||||
timeout: 5.0
|
||||
|
@ -781,7 +782,7 @@ engines:
|
|||
title_query: name_with_namespace
|
||||
content_query: description
|
||||
page_size: 20
|
||||
categories: [it, repos]
|
||||
categories: [ it, repos ]
|
||||
shortcut: gl
|
||||
timeout: 10.0
|
||||
disabled: true
|
||||
|
@ -807,7 +808,7 @@ engines:
|
|||
url_query: html_url
|
||||
title_query: name
|
||||
content_query: description
|
||||
categories: [it, repos]
|
||||
categories: [ it, repos ]
|
||||
shortcut: cb
|
||||
disabled: true
|
||||
about:
|
||||
|
@ -860,7 +861,7 @@ engines:
|
|||
|
||||
- name: google play apps
|
||||
engine: google_play
|
||||
categories: [files, apps]
|
||||
categories: [ files, apps ]
|
||||
shortcut: gpa
|
||||
play_categ: apps
|
||||
disabled: true
|
||||
|
@ -932,7 +933,7 @@ engines:
|
|||
url_xpath: './/div[@class="ans"]//a/@href'
|
||||
content_xpath: './/div[@class="from"]'
|
||||
page_size: 20
|
||||
categories: [it, packages]
|
||||
categories: [ it, packages ]
|
||||
shortcut: ho
|
||||
about:
|
||||
website: https://hoogle.haskell.org/
|
||||
|
@ -1093,7 +1094,7 @@ engines:
|
|||
- name: mdn
|
||||
shortcut: mdn
|
||||
engine: json_engine
|
||||
categories: [it]
|
||||
categories: [ it ]
|
||||
paging: true
|
||||
search_url: https://developer.mozilla.org/api/v1/search?q={query}&page={pageno}
|
||||
results_query: documents
|
||||
|
@ -1167,7 +1168,7 @@ engines:
|
|||
title_query: package/name
|
||||
content_query: package/description
|
||||
page_size: 25
|
||||
categories: [it, packages]
|
||||
categories: [ it, packages ]
|
||||
disabled: true
|
||||
timeout: 5.0
|
||||
shortcut: npm
|
||||
|
@ -1281,7 +1282,7 @@ engines:
|
|||
url_query: url
|
||||
title_query: name
|
||||
content_query: description
|
||||
categories: [it, packages]
|
||||
categories: [ it, packages ]
|
||||
disabled: true
|
||||
timeout: 5.0
|
||||
shortcut: pack
|
||||
|
@ -1355,7 +1356,7 @@ engines:
|
|||
- name: presearch
|
||||
engine: presearch
|
||||
search_type: search
|
||||
categories: [general, web]
|
||||
categories: [ general, web ]
|
||||
shortcut: ps
|
||||
timeout: 4.0
|
||||
disabled: true
|
||||
|
@ -1364,7 +1365,7 @@ engines:
|
|||
engine: presearch
|
||||
network: presearch
|
||||
search_type: images
|
||||
categories: [images, web]
|
||||
categories: [ images, web ]
|
||||
timeout: 4.0
|
||||
shortcut: psimg
|
||||
disabled: true
|
||||
|
@ -1373,7 +1374,7 @@ engines:
|
|||
engine: presearch
|
||||
network: presearch
|
||||
search_type: videos
|
||||
categories: [general, web]
|
||||
categories: [ general, web ]
|
||||
timeout: 4.0
|
||||
shortcut: psvid
|
||||
disabled: true
|
||||
|
@ -1382,7 +1383,7 @@ engines:
|
|||
engine: presearch
|
||||
network: presearch
|
||||
search_type: news
|
||||
categories: [news, web]
|
||||
categories: [ news, web ]
|
||||
timeout: 4.0
|
||||
shortcut: psnews
|
||||
disabled: true
|
||||
|
@ -1396,7 +1397,7 @@ engines:
|
|||
url_xpath: ./div/h3/a/@href
|
||||
title_xpath: ./div/h3/a
|
||||
content_xpath: ./div/div/div[contains(@class,"packages-description")]/span
|
||||
categories: [packages, it]
|
||||
categories: [ packages, it ]
|
||||
timeout: 3.0
|
||||
disabled: true
|
||||
first_page_num: 1
|
||||
|
@ -1423,7 +1424,7 @@ engines:
|
|||
content_xpath: ./p
|
||||
suggestion_xpath: /html/body/main/div/div/div/form/div/div[@class="callout-block"]/p/span/a[@class="link"]
|
||||
first_page_num: 1
|
||||
categories: [it, packages]
|
||||
categories: [ it, packages ]
|
||||
about:
|
||||
website: https://pypi.org
|
||||
wikidata_id: Q2984686
|
||||
|
@ -1436,7 +1437,7 @@ engines:
|
|||
qwant_categ: web
|
||||
engine: qwant
|
||||
shortcut: qw
|
||||
categories: [general, web]
|
||||
categories: [ general, web ]
|
||||
additional_tests:
|
||||
rosebud: *test_rosebud
|
||||
|
||||
|
@ -1451,14 +1452,14 @@ engines:
|
|||
qwant_categ: images
|
||||
engine: qwant
|
||||
shortcut: qwi
|
||||
categories: [images, web]
|
||||
categories: [ images, web ]
|
||||
network: qwant
|
||||
|
||||
- name: qwant videos
|
||||
qwant_categ: videos
|
||||
engine: qwant
|
||||
shortcut: qwv
|
||||
categories: [videos, web]
|
||||
categories: [ videos, web ]
|
||||
network: qwant
|
||||
|
||||
# - name: library
|
||||
|
@ -1526,13 +1527,13 @@ engines:
|
|||
engine: stackexchange
|
||||
shortcut: st
|
||||
api_site: 'stackoverflow'
|
||||
categories: [it, q&a]
|
||||
categories: [ it, q&a ]
|
||||
|
||||
- name: askubuntu
|
||||
engine: stackexchange
|
||||
shortcut: ubuntu
|
||||
api_site: 'askubuntu'
|
||||
categories: [it, q&a]
|
||||
categories: [ it, q&a ]
|
||||
|
||||
- name: internetarchivescholar
|
||||
engine: internet_archive_scholar
|
||||
|
@ -1543,7 +1544,7 @@ engines:
|
|||
engine: stackexchange
|
||||
shortcut: su
|
||||
api_site: 'superuser'
|
||||
categories: [it, q&a]
|
||||
categories: [ it, q&a ]
|
||||
|
||||
- name: searchcode code
|
||||
engine: searchcode_code
|
||||
|
@ -1737,7 +1738,7 @@ engines:
|
|||
url_query: URL
|
||||
title_query: Title
|
||||
content_query: Snippet
|
||||
categories: [general, web]
|
||||
categories: [ general, web ]
|
||||
shortcut: wib
|
||||
disabled: true
|
||||
about:
|
||||
|
@ -1766,7 +1767,7 @@ engines:
|
|||
engine: mediawiki
|
||||
weight: 0.5
|
||||
shortcut: wb
|
||||
categories: [general, wikimedia]
|
||||
categories: [ general, wikimedia ]
|
||||
base_url: 'https://{language}.wikibooks.org/'
|
||||
search_type: text
|
||||
disabled: true
|
||||
|
@ -1777,7 +1778,7 @@ engines:
|
|||
- name: wikinews
|
||||
engine: mediawiki
|
||||
shortcut: wn
|
||||
categories: [news, wikimedia]
|
||||
categories: [ news, wikimedia ]
|
||||
base_url: 'https://{language}.wikinews.org/'
|
||||
search_type: text
|
||||
srsort: create_timestamp_desc
|
||||
|
@ -1789,7 +1790,7 @@ engines:
|
|||
engine: mediawiki
|
||||
weight: 0.5
|
||||
shortcut: wq
|
||||
categories: [general, wikimedia]
|
||||
categories: [ general, wikimedia ]
|
||||
base_url: 'https://{language}.wikiquote.org/'
|
||||
search_type: text
|
||||
disabled: true
|
||||
|
@ -1803,7 +1804,7 @@ engines:
|
|||
engine: mediawiki
|
||||
weight: 0.5
|
||||
shortcut: ws
|
||||
categories: [general, wikimedia]
|
||||
categories: [ general, wikimedia ]
|
||||
base_url: 'https://{language}.wikisource.org/'
|
||||
search_type: text
|
||||
disabled: true
|
||||
|
@ -1814,7 +1815,7 @@ engines:
|
|||
- name: wikispecies
|
||||
engine: mediawiki
|
||||
shortcut: wsp
|
||||
categories: [general, science, wikimedia]
|
||||
categories: [ general, science, wikimedia ]
|
||||
base_url: 'https://species.wikimedia.org/'
|
||||
search_type: text
|
||||
disabled: true
|
||||
|
@ -1825,7 +1826,7 @@ engines:
|
|||
- name: wiktionary
|
||||
engine: mediawiki
|
||||
shortcut: wt
|
||||
categories: [dictionaries, wikimedia]
|
||||
categories: [ dictionaries, wikimedia ]
|
||||
base_url: 'https://{language}.wiktionary.org/'
|
||||
search_type: text
|
||||
about:
|
||||
|
@ -1836,7 +1837,7 @@ engines:
|
|||
engine: mediawiki
|
||||
weight: 0.5
|
||||
shortcut: wv
|
||||
categories: [general, wikimedia]
|
||||
categories: [ general, wikimedia ]
|
||||
base_url: 'https://{language}.wikiversity.org/'
|
||||
search_type: text
|
||||
disabled: true
|
||||
|
@ -1848,7 +1849,7 @@ engines:
|
|||
engine: mediawiki
|
||||
weight: 0.5
|
||||
shortcut: wy
|
||||
categories: [general, wikimedia]
|
||||
categories: [ general, wikimedia ]
|
||||
base_url: 'https://{language}.wikivoyage.org/'
|
||||
search_type: text
|
||||
disabled: true
|
||||
|
@ -1926,7 +1927,7 @@ engines:
|
|||
shortcut: mjk
|
||||
engine: xpath
|
||||
paging: true
|
||||
categories: [general, web]
|
||||
categories: [ general, web ]
|
||||
search_url: https://www.mojeek.com/search?q={query}&s={pageno}&lang={lang}&lb={lang}
|
||||
results_xpath: //ul[@class="results-standard"]/li/a[@class="ob"]
|
||||
url_xpath: ./@href
|
||||
|
@ -1952,7 +1953,7 @@ engines:
|
|||
|
||||
- name: naver
|
||||
shortcut: nvr
|
||||
categories: [general, web]
|
||||
categories: [ general, web ]
|
||||
engine: xpath
|
||||
paging: true
|
||||
search_url: https://search.naver.com/search.naver?where=webkr&sm=osp_hty&ie=UTF-8&query={query}&start={pageno}
|
||||
|
@ -1982,7 +1983,7 @@ engines:
|
|||
content_xpath: ./span/p
|
||||
suggestion_xpath: /html/body/main/div/div[@class="search__suggestions"]/p/a
|
||||
first_page_num: 1
|
||||
categories: [it, packages]
|
||||
categories: [ it, packages ]
|
||||
disabled: true
|
||||
about:
|
||||
website: https://rubygems.org/
|
||||
|
@ -2047,20 +2048,20 @@ engines:
|
|||
engine: wordnik
|
||||
shortcut: def
|
||||
base_url: https://www.wordnik.com/
|
||||
categories: [dictionaries]
|
||||
categories: [ dictionaries ]
|
||||
timeout: 5.0
|
||||
|
||||
- name: woxikon.de synonyme
|
||||
engine: xpath
|
||||
shortcut: woxi
|
||||
categories: [dictionaries]
|
||||
categories: [ dictionaries ]
|
||||
timeout: 5.0
|
||||
disabled: true
|
||||
search_url: https://synonyme.woxikon.de/synonyme/{query}.php
|
||||
url_xpath: //div[@class="upper-synonyms"]/a/@href
|
||||
content_xpath: //div[@class="synonyms-list-group"]
|
||||
title_xpath: //div[@class="upper-synonyms"]/a
|
||||
no_result_for_http_status: [404]
|
||||
no_result_for_http_status: [ 404 ]
|
||||
about:
|
||||
website: https://www.woxikon.de/
|
||||
wikidata_id: # No Wikidata ID
|
||||
|
@ -2154,7 +2155,7 @@ engines:
|
|||
shortcut: br
|
||||
time_range_support: true
|
||||
paging: true
|
||||
categories: [general, web]
|
||||
categories: [ general, web ]
|
||||
brave_category: search
|
||||
# brave_spellcheck: true
|
||||
|
||||
|
@ -2162,14 +2163,14 @@ engines:
|
|||
engine: brave
|
||||
network: brave
|
||||
shortcut: brimg
|
||||
categories: [images, web]
|
||||
categories: [ images, web ]
|
||||
brave_category: images
|
||||
|
||||
- name: brave.videos
|
||||
engine: brave
|
||||
network: brave
|
||||
shortcut: brvid
|
||||
categories: [videos, web]
|
||||
categories: [ videos, web ]
|
||||
brave_category: videos
|
||||
|
||||
- name: brave.news
|
||||
|
@ -2197,7 +2198,7 @@ engines:
|
|||
url_xpath: ./@href
|
||||
title_xpath: ./div[@class="h"]/h4
|
||||
content_xpath: ./div[@class="h"]/p
|
||||
categories: [it, packages]
|
||||
categories: [ it, packages ]
|
||||
disabled: true
|
||||
about:
|
||||
website: https://lib.rs
|
||||
|
@ -2216,7 +2217,7 @@ engines:
|
|||
title_xpath: ./h4/a[2]
|
||||
content_xpath: ./p
|
||||
first_page_num: 1
|
||||
categories: [it, repos]
|
||||
categories: [ it, repos ]
|
||||
disabled: true
|
||||
about:
|
||||
website: https://sr.ht
|
||||
|
@ -2235,7 +2236,7 @@ engines:
|
|||
title_xpath: //div[@class="result"]/p[@class='title fsL1']/a
|
||||
content_xpath: //p[contains(@class,'url fsM')]/following-sibling::p
|
||||
first_page_num: 0
|
||||
categories: [general, web]
|
||||
categories: [ general, web ]
|
||||
disabled: true
|
||||
timeout: 4.0
|
||||
about:
|
||||
|
@ -2258,7 +2259,7 @@ engines:
|
|||
url_xpath: ./div[@class="SearchSnippet-headerContainer"]/h2/a/@href
|
||||
title_xpath: ./div[@class="SearchSnippet-headerContainer"]/h2/a
|
||||
content_xpath: ./p[@class="SearchSnippet-synopsis"]
|
||||
categories: [packages, it]
|
||||
categories: [ packages, it ]
|
||||
timeout: 3.0
|
||||
disabled: true
|
||||
about:
|
||||
|
|
|
@@ -1,13 +1,13 @@
 import { BaseMessage } from '@langchain/core/messages';
 import {
-  PromptTemplate,
   ChatPromptTemplate,
   MessagesPlaceholder,
+  PromptTemplate,
 } from '@langchain/core/prompts';
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';
@@ -1,7 +1,7 @@
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { PromptTemplate } from '@langchain/core/prompts';
 import formatChatHistoryAsString from '../utils/formatHistory';
@@ -1,13 +1,13 @@
 import { BaseMessage } from '@langchain/core/messages';
 import {
-  PromptTemplate,
   ChatPromptTemplate,
   MessagesPlaceholder,
+  PromptTemplate,
 } from '@langchain/core/prompts';
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';
@@ -1,10 +1,10 @@
-import {RunnableMap, RunnableSequence} from '@langchain/core/runnables';
+import { RunnableMap, RunnableSequence } from '@langchain/core/runnables';
 import ListLineOutputParser from '../lib/outputParsers/listLineOutputParser';
-import {PromptTemplate} from '@langchain/core/prompts';
+import { PromptTemplate } from '@langchain/core/prompts';
 import formatChatHistoryAsString from '../utils/formatHistory';
-import {BaseMessage} from '@langchain/core/messages';
-import {BaseChatModel} from '@langchain/core/language_models/chat_models';
-import {ChatOpenAI} from '@langchain/openai';
+import { BaseMessage } from '@langchain/core/messages';
+import { BaseChatModel } from '@langchain/core/language_models/chat_models';
+import { ChatOpenAI } from '@langchain/openai';

 const suggestionGeneratorPrompt = `
 You are an AI suggestion generator for an AI powered search engine. You will be given a conversation below. You need to generate 4-5 suggestions based on the conversation. The suggestion should be relevant to the conversation that can be used by the user to ask the chat model for more information.

@@ -48,7 +48,9 @@ const generateSuggestions = (
   llm: ChatOpenAI,
 ) => {
   llm.temperature = 0;
-  const suggestionGeneratorChain = createSuggestionGeneratorChain(llm as unknown as BaseChatModel);
+  const suggestionGeneratorChain = createSuggestionGeneratorChain(
+    llm as unknown as BaseChatModel,
+  );
   return suggestionGeneratorChain.invoke(input);
 };
@@ -1,7 +1,7 @@
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { PromptTemplate } from '@langchain/core/prompts';
 import formatChatHistoryAsString from '../utils/formatHistory';
@@ -1,13 +1,13 @@
 import { BaseMessage } from '@langchain/core/messages';
 import {
-  PromptTemplate,
   ChatPromptTemplate,
   MessagesPlaceholder,
+  PromptTemplate,
 } from '@langchain/core/prompts';
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';
@@ -1,13 +1,13 @@
 import { BaseMessage } from '@langchain/core/messages';
 import {
-  PromptTemplate,
   ChatPromptTemplate,
   MessagesPlaceholder,
+  PromptTemplate,
 } from '@langchain/core/prompts';
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';
@@ -1,13 +1,13 @@
 import { BaseMessage } from '@langchain/core/messages';
 import {
-  PromptTemplate,
   ChatPromptTemplate,
   MessagesPlaceholder,
+  PromptTemplate,
 } from '@langchain/core/prompts';
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';
@@ -5,6 +5,7 @@ interface LineListOutputParserArgs {
 }

 class LineListOutputParser extends BaseOutputParser<string[]> {
+  lc_namespace = ['langchain', 'output_parsers', 'line_list_output_parser'];
   private key = 'questions';

   constructor(args?: LineListOutputParserArgs) {

@@ -16,8 +17,6 @@ class LineListOutputParser extends BaseOutputParser<string[]> {
     return 'LineListOutputParser';
   }

-  lc_namespace = ['langchain', 'output_parsers', 'line_list_output_parser'];
-
   async parse(text: string): Promise<string[]> {
     const regex = /^(\s*(-|\*|\d+\.\s|\d+\)\s|\u2022)\s*)+/;
     const startKeyIndex = text.indexOf(`<${this.key}>`);
@@ -1,236 +1,241 @@
import {ChatOpenAI, OpenAIEmbeddings} from '@langchain/openai';
import {ChatOllama} from '@langchain/community/chat_models/ollama';
import {OllamaEmbeddings} from '@langchain/community/embeddings/ollama';
import {HuggingFaceTransformersEmbeddings} from './huggingfaceTransformer';
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { ChatOllama } from '@langchain/community/chat_models/ollama';
import { OllamaEmbeddings } from '@langchain/community/embeddings/ollama';
import { HuggingFaceTransformersEmbeddings } from './huggingfaceTransformer';
import {
  getCustomEmbeddingModels,
  getCustomModels,
  getGroqApiKey,
  getOllamaApiEndpoint,
  getOpenaiApiKey,
  getCustomEmbeddingModels,
  getCustomModels,
  getGroqApiKey,
  getOllamaApiEndpoint,
  getOpenaiApiKey,
} from '../config';
import logger from '../utils/logger';

export const getAvailableChatModelProviders = async () => {
  const openAIApiKey = getOpenaiApiKey();
  const groqApiKey = getGroqApiKey();
  const ollamaEndpoint = getOllamaApiEndpoint();
  const customModels = getCustomModels();
  const openAIApiKey = getOpenaiApiKey();
  const groqApiKey = getGroqApiKey();
  const ollamaEndpoint = getOllamaApiEndpoint();
  const customModels = getCustomModels();

  const models = {};
  const models = {};

  if (openAIApiKey) {
    try {
      models['openai'] = {
        'GPT-3.5 turbo': new ChatOpenAI({
          openAIApiKey,
          modelName: 'gpt-3.5-turbo',
          temperature: 0.7,
        }),
        'GPT-4': new ChatOpenAI({
          openAIApiKey,
          modelName: 'gpt-4',
          temperature: 0.7,
        }),
        'GPT-4 turbo': new ChatOpenAI({
          openAIApiKey,
          modelName: 'gpt-4-turbo',
          temperature: 0.7,
        }),
        'GPT-4 omni': new ChatOpenAI({
          openAIApiKey,
          modelName: 'gpt-4o',
          temperature: 0.7,
        }),
      };
    } catch (err) {
      logger.error(`Error loading OpenAI models: ${err}`);
    }
  if (openAIApiKey) {
    try {
      models['openai'] = {
        'GPT-3.5 turbo': new ChatOpenAI({
          openAIApiKey,
          modelName: 'gpt-3.5-turbo',
          temperature: 0.7,
        }),
        'GPT-4': new ChatOpenAI({
          openAIApiKey,
          modelName: 'gpt-4',
          temperature: 0.7,
        }),
        'GPT-4 turbo': new ChatOpenAI({
          openAIApiKey,
          modelName: 'gpt-4-turbo',
          temperature: 0.7,
        }),
        'GPT-4 omni': new ChatOpenAI({
          openAIApiKey,
          modelName: 'gpt-4o',
          temperature: 0.7,
        }),
      };
    } catch (err) {
      logger.error(`Error loading OpenAI models: ${err}`);
    }
  }

  if (groqApiKey) {
    try {
      models['groq'] = {
        'LLaMA3 8b': new ChatOpenAI(
          {
            openAIApiKey: groqApiKey,
            modelName: 'llama3-8b-8192',
            temperature: 0.7,
          },
          {
            baseURL: 'https://api.groq.com/openai/v1',
          },
        ),
        'LLaMA3 70b': new ChatOpenAI(
          {
            openAIApiKey: groqApiKey,
            modelName: 'llama3-70b-8192',
            temperature: 0.7,
          },
          {
            baseURL: 'https://api.groq.com/openai/v1',
          },
        ),
        'Mixtral 8x7b': new ChatOpenAI(
          {
            openAIApiKey: groqApiKey,
            modelName: 'mixtral-8x7b-32768',
            temperature: 0.7,
          },
          {
            baseURL: 'https://api.groq.com/openai/v1',
          },
        ),
        'Gemma 7b': new ChatOpenAI(
          {
            openAIApiKey: groqApiKey,
            modelName: 'gemma-7b-it',
            temperature: 0.7,
          },
          {
            baseURL: 'https://api.groq.com/openai/v1',
          },
        ),
      };
    } catch (err) {
      logger.error(`Error loading Groq models: ${err}`);
    }
  if (groqApiKey) {
    try {
      models['groq'] = {
        'LLaMA3 8b': new ChatOpenAI(
          {
            openAIApiKey: groqApiKey,
            modelName: 'llama3-8b-8192',
            temperature: 0.7,
          },
          {
            baseURL: 'https://api.groq.com/openai/v1',
          },
        ),
        'LLaMA3 70b': new ChatOpenAI(
          {
            openAIApiKey: groqApiKey,
            modelName: 'llama3-70b-8192',
            temperature: 0.7,
          },
          {
            baseURL: 'https://api.groq.com/openai/v1',
          },
        ),
        'Mixtral 8x7b': new ChatOpenAI(
          {
            openAIApiKey: groqApiKey,
            modelName: 'mixtral-8x7b-32768',
            temperature: 0.7,
          },
          {
            baseURL: 'https://api.groq.com/openai/v1',
          },
        ),
        'Gemma 7b': new ChatOpenAI(
          {
            openAIApiKey: groqApiKey,
            modelName: 'gemma-7b-it',
            temperature: 0.7,
          },
          {
            baseURL: 'https://api.groq.com/openai/v1',
          },
        ),
      };
    } catch (err) {
      logger.error(`Error loading Groq models: ${err}`);
    }
  }

  if (ollamaEndpoint) {
    try {
      const response = await fetch(`${ollamaEndpoint}/api/tags`, {
        headers: {
          'Content-Type': 'application/json',
        },
      });
  if (ollamaEndpoint) {
    try {
      const response = await fetch(`${ollamaEndpoint}/api/tags`, {
        headers: {
          'Content-Type': 'application/json',
        },
      });

      const {models: ollamaModels} = (await response.json()) as any;
      const { models: ollamaModels } = (await response.json()) as any;

      models['ollama'] = ollamaModels.reduce((acc, model) => {
        acc[model.model] = new ChatOllama({
          baseUrl: ollamaEndpoint,
          model: model.model,
          temperature: 0.7,
        });
        return acc;
      }, {});
    } catch (err) {
      logger.error(`Error loading Ollama models: ${err}`);
    }
      models['ollama'] = ollamaModels.reduce((acc, model) => {
        acc[model.model] = new ChatOllama({
          baseUrl: ollamaEndpoint,
          model: model.model,
          temperature: 0.7,
        });
        return acc;
      }, {});
    } catch (err) {
      logger.error(`Error loading Ollama models: ${err}`);
    }
  }

  models['custom_openai'] = {};
  models['custom_openai'] = {};

  if (customModels && customModels.length > 0) {
    models['custom'] = {};
    try {
      customModels.forEach((model) => {
        if (model.provider === "openai") {
          models['custom'] = {
            ...models['custom'],
            [model.name]: new ChatOpenAI({
              openAIApiKey: model.api_key,
              modelName: model.name,
              temperature: 0.7,
              configuration: {
                baseURL: model.base_url,
              }
            })
          }
        }
      });
    } catch (err) {
      logger.error(`Error loading custom models: ${err}`);
  if (customModels && customModels.length > 0) {
    models['custom'] = {};
    try {
      customModels.forEach((model) => {
        if (model.provider === 'openai') {
          models['custom'] = {
            ...models['custom'],
            [model.name]: new ChatOpenAI({
              openAIApiKey: model.api_key,
              modelName: model.name,
              temperature: 0.7,
              configuration: {
                baseURL: model.base_url,
              },
            }),
          };
        }
      });
    } catch (err) {
      logger.error(`Error loading custom models: ${err}`);
    }
  }

  return models;
  return models;
};

export const getAvailableEmbeddingModelProviders = async () => {
  const openAIApiKey = getOpenaiApiKey();
  const ollamaEndpoint = getOllamaApiEndpoint();
  const customEmbeddingModels = getCustomEmbeddingModels();
  const openAIApiKey = getOpenaiApiKey();
  const ollamaEndpoint = getOllamaApiEndpoint();
  const customEmbeddingModels = getCustomEmbeddingModels();

  const models = {};

  if (openAIApiKey) {
    try {
      models['openai'] = {
        'Text embedding 3 small': new OpenAIEmbeddings({
          openAIApiKey,
          modelName: 'text-embedding-3-small',
        }, {baseURL: "http://10.0.1.2:5000/v1"}),
        'Text embedding 3 large': new OpenAIEmbeddings({
          openAIApiKey,
          modelName: 'text-embedding-3-large',
        }),
      };
    } catch (err) {
      logger.error(`Error loading OpenAI embeddings: ${err}`);
    }
  }

  if (ollamaEndpoint) {
    try {
      const response = await fetch(`${ollamaEndpoint}/api/tags`, {
        headers: {
          'Content-Type': 'application/json',
        },
      });

      const {models: ollamaModels} = (await response.json()) as any;

      models['ollama'] = ollamaModels.reduce((acc, model) => {
        acc[model.model] = new OllamaEmbeddings({
          baseUrl: ollamaEndpoint,
          model: model.model,
        });
        return acc;
      }, {});
    } catch (err) {
      logger.error(`Error loading Ollama embeddings: ${err}`);
    }
  }

  if (customEmbeddingModels && customEmbeddingModels.length > 0) {
    models['custom'] = {};
    try {
      customEmbeddingModels.forEach((model) => {
        if (model.provider === "openai") {
          models['custom'] = {
            ...models['custom'],
            [model.name]: new OpenAIEmbeddings({
              openAIApiKey: model.api_key,
              modelName: model.model,
            },
            {
              baseURL: model.base_url,
            }),
          }
        }
      });
    } catch (err) {
      logger.error(`Error loading custom models: ${err}`);
    }
  }

  const models = {};

  if (openAIApiKey) {
    try {
      models['local'] = {
        'BGE Small': new HuggingFaceTransformersEmbeddings({
          modelName: 'Xenova/bge-small-en-v1.5',
        }),
        'GTE Small': new HuggingFaceTransformersEmbeddings({
          modelName: 'Xenova/gte-small',
        }),
        'Bert Multilingual': new HuggingFaceTransformersEmbeddings({
          modelName: 'Xenova/bert-base-multilingual-uncased',
        }),
      };
      models['openai'] = {
        'Text embedding 3 small': new OpenAIEmbeddings(
          {
            openAIApiKey,
            modelName: 'text-embedding-3-small',
          },
          { baseURL: 'http://10.0.1.2:5000/v1' },
        ),
        'Text embedding 3 large': new OpenAIEmbeddings({
          openAIApiKey,
          modelName: 'text-embedding-3-large',
        }),
      };
    } catch (err) {
      logger.error(`Error loading local embeddings: ${err}`);
      logger.error(`Error loading OpenAI embeddings: ${err}`);
    }
  }

  return models;
  if (ollamaEndpoint) {
    try {
      const response = await fetch(`${ollamaEndpoint}/api/tags`, {
        headers: {
          'Content-Type': 'application/json',
        },
      });

      const { models: ollamaModels } = (await response.json()) as any;

      models['ollama'] = ollamaModels.reduce((acc, model) => {
        acc[model.model] = new OllamaEmbeddings({
          baseUrl: ollamaEndpoint,
          model: model.model,
        });
        return acc;
      }, {});
    } catch (err) {
      logger.error(`Error loading Ollama embeddings: ${err}`);
    }
  }

  if (customEmbeddingModels && customEmbeddingModels.length > 0) {
    models['custom'] = {};
    try {
      customEmbeddingModels.forEach((model) => {
        if (model.provider === 'openai') {
          models['custom'] = {
            ...models['custom'],
            [model.name]: new OpenAIEmbeddings(
              {
                openAIApiKey: model.api_key,
                modelName: model.model,
              },
              {
                baseURL: model.base_url,
              },
            ),
          };
        }
      });
    } catch (err) {
      logger.error(`Error loading custom models: ${err}`);
    }
  }

  try {
    models['local'] = {
      'BGE Small': new HuggingFaceTransformersEmbeddings({
        modelName: 'Xenova/bge-small-en-v1.5',
      }),
      'GTE Small': new HuggingFaceTransformersEmbeddings({
        modelName: 'Xenova/gte-small',
      }),
      'Bert Multilingual': new HuggingFaceTransformersEmbeddings({
        modelName: 'Xenova/bert-base-multilingual-uncased',
      }),
    };
  } catch (err) {
    logger.error(`Error loading local embeddings: ${err}`);
  }

  return models;
};
@@ -2,7 +2,7 @@ import express from 'express';
 import handleImageSearch from '../agents/imageSearchAgent';
 import { BaseChatModel } from '@langchain/core/language_models/chat_models';
 import { getAvailableChatModelProviders } from '../lib/providers';
-import { HumanMessage, AIMessage } from '@langchain/core/messages';
+import { AIMessage, HumanMessage } from '@langchain/core/messages';
 import logger from '../utils/logger';

 const router = express.Router();
@@ -1,8 +1,8 @@
 import express from 'express';
 import generateSuggestions from '../agents/suggestionGeneratorAgent';
-import {BaseChatModel} from '@langchain/core/language_models/chat_models';
-import {getAvailableChatModelProviders} from '../lib/providers';
-import {AIMessage, HumanMessage} from '@langchain/core/messages';
+import { BaseChatModel } from '@langchain/core/language_models/chat_models';
+import { getAvailableChatModelProviders } from '../lib/providers';
+import { AIMessage, HumanMessage } from '@langchain/core/messages';
 import logger from '../utils/logger';

 const router = express.Router();
@@ -1,7 +1,7 @@
 import express from 'express';
 import { BaseChatModel } from '@langchain/core/language_models/chat_models';
 import { getAvailableChatModelProviders } from '../lib/providers';
-import { HumanMessage, AIMessage } from '@langchain/core/messages';
+import { AIMessage, HumanMessage } from '@langchain/core/messages';
 import logger from '../utils/logger';
 import handleVideoSearch from '../agents/videoSearchAgent';
@@ -1,11 +1,14 @@
-import {WebSocket} from 'ws';
-import {handleMessage} from './messageHandler';
-import {getAvailableChatModelProviders, getAvailableEmbeddingModelProviders,} from '../lib/providers';
-import {BaseChatModel} from '@langchain/core/language_models/chat_models';
-import type {Embeddings} from '@langchain/core/embeddings';
-import type {IncomingMessage} from 'http';
+import { WebSocket } from 'ws';
+import { handleMessage } from './messageHandler';
+import {
+  getAvailableChatModelProviders,
+  getAvailableEmbeddingModelProviders,
+} from '../lib/providers';
+import { BaseChatModel } from '@langchain/core/language_models/chat_models';
+import type { Embeddings } from '@langchain/core/embeddings';
+import type { IncomingMessage } from 'http';
 import logger from '../utils/logger';
-import {ChatOpenAI} from '@langchain/openai';
+import { ChatOpenAI } from '@langchain/openai';

 export const handleConnection = async (
   ws: WebSocket,
@@ -1,5 +1,5 @@
import { EventEmitter, WebSocket } from 'ws';
import { BaseMessage, AIMessage, HumanMessage } from '@langchain/core/messages';
import { AIMessage, BaseMessage, HumanMessage } from '@langchain/core/messages';
import handleWebSearch from '../agents/webSearchAgent';
import handleAcademicSearch from '../agents/academicSearchAgent';
import handleWritingAssistant from '../agents/writingAssistant';
@@ -1,6 +1,8 @@
{
  "compilerOptions": {
    "lib": ["ESNext"],
    "lib": [
      "ESNext"
    ],
    "module": "Node16",
    "moduleResolution": "Node16",
    "target": "ESNext",
@@ -13,6 +15,11 @@
    "skipLibCheck": true,
    "skipDefaultLibCheck": true
  },
  "include": ["src"],
  "exclude": ["node_modules", "**/*.spec.ts"]
  "include": [
    "src"
  ],
  "exclude": [
    "node_modules",
    "**/*.spec.ts"
  ]
}
@@ -3,11 +3,11 @@
@tailwind utilities;

@layer base {
  .overflow-hidden-scrollable {
    -ms-overflow-style: none;
  }
  .overflow-hidden-scrollable {
    -ms-overflow-style: none;
  }

  .overflow-hidden-scrollable::-webkit-scrollbar {
    display: none;
  }
  .overflow-hidden-scrollable::-webkit-scrollbar {
    display: none;
  }
}
@@ -1,7 +1,7 @@
/* eslint-disable @next/next/no-img-element */
import { Dialog, Transition } from '@headlessui/react';
import { Document } from '@langchain/core/documents';
import { Fragment, useState } from 'react';
import {Dialog, Transition} from '@headlessui/react';
import {Document} from '@langchain/core/documents';
import {Fragment, useState} from 'react';

const MessageSources = ({ sources }: { sources: Document[] }) => {
  const [isDialogOpen, setIsDialogOpen] = useState(false);
@@ -1,9 +1,9 @@
/* eslint-disable @next/next/no-img-element */
import { ImagesIcon, PlusIcon } from 'lucide-react';
import { useState } from 'react';
import {ImagesIcon, PlusIcon} from 'lucide-react';
import {useState} from 'react';
import Lightbox from 'yet-another-react-lightbox';
import 'yet-another-react-lightbox/styles.css';
import { Message } from './ChatWindow';
import {Message} from './ChatWindow';

type Image = {
  url: string;
@@ -1,9 +1,9 @@
/* eslint-disable @next/next/no-img-element */
import { PlayCircle, PlayIcon, PlusIcon, VideoIcon } from 'lucide-react';
import { useState } from 'react';
import Lightbox, { GenericSlide, VideoSlide } from 'yet-another-react-lightbox';
import {PlayCircle, PlusIcon, VideoIcon} from 'lucide-react';
import {useState} from 'react';
import Lightbox, {GenericSlide, VideoSlide} from 'yet-another-react-lightbox';
import 'yet-another-react-lightbox/styles.css';
import { Message } from './ChatWindow';
import {Message} from './ChatWindow';

type Video = {
  url: string;
@@ -1,12 +1,11 @@
'use client';

import { cn } from '@/lib/utils';
import { BookOpenText, Home, Search, SquarePen, Settings } from 'lucide-react';
import { BookOpenText, Home, Search, Settings, SquarePen } from 'lucide-react';
import Link from 'next/link';
import { useSelectedLayoutSegments } from 'next/navigation';
import React, { Fragment, useState } from 'react';
import React, { useState } from 'react';
import Layout from './Layout';
import { Dialog, Transition } from '@headlessui/react';
import SettingsDialog from './SettingsDialog';

const Sidebar = ({ children }: { children: React.ReactNode }) => {
@@ -1,12 +1,12 @@
/** @type {import('next').NextConfig} */
const nextConfig = {
  images: {
    remotePatterns: [
      {
        hostname: 's2.googleusercontent.com',
      },
    ],
  },
  images: {
    remotePatterns: [
      {
        hostname: 's2.googleusercontent.com',
      },
    ],
  },
};

export default nextConfig;
@@ -1 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 394 80"><path fill="#000" d="M262 0h68.5v12.7h-27.2v66.6h-13.6V12.7H262V0ZM149 0v12.7H94v20.4h44.3v12.6H94v21h55v12.6H80.5V0h68.7zm34.3 0h-17.8l63.8 79.4h17.9l-32-39.7 32-39.6h-17.9l-23 28.6-23-28.6zm18.3 56.7-9-11-27.1 33.7h17.8l18.3-22.7z"/><path fill="#000" d="M81 79.3 17 0H0v79.3h13.6V17l50.2 62.3H81Zm252.6-.4c-1 0-1.8-.4-2.5-1s-1.1-1.6-1.1-2.6.3-1.8 1-2.5 1.6-1 2.6-1 1.8.3 2.5 1a3.4 3.4 0 0 1 .6 4.3 3.7 3.7 0 0 1-3 1.8zm23.2-33.5h6v23.3c0 2.1-.4 4-1.3 5.5a9.1 9.1 0 0 1-3.8 3.5c-1.6.8-3.5 1.3-5.7 1.3-2 0-3.7-.4-5.3-1s-2.8-1.8-3.7-3.2c-.9-1.3-1.4-3-1.4-5h6c.1.8.3 1.6.7 2.2s1 1.2 1.6 1.5c.7.4 1.5.5 2.4.5 1 0 1.8-.2 2.4-.6a4 4 0 0 0 1.6-1.8c.3-.8.5-1.8.5-3V45.5zm30.9 9.1a4.4 4.4 0 0 0-2-3.3 7.5 7.5 0 0 0-4.3-1.1c-1.3 0-2.4.2-3.3.5-.9.4-1.6 1-2 1.6a3.5 3.5 0 0 0-.3 4c.3.5.7.9 1.3 1.2l1.8 1 2 .5 3.2.8c1.3.3 2.5.7 3.7 1.2a13 13 0 0 1 3.2 1.8 8.1 8.1 0 0 1 3 6.5c0 2-.5 3.7-1.5 5.1a10 10 0 0 1-4.4 3.5c-1.8.8-4.1 1.2-6.8 1.2-2.6 0-4.9-.4-6.8-1.2-2-.8-3.4-2-4.5-3.5a10 10 0 0 1-1.7-5.6h6a5 5 0 0 0 3.5 4.6c1 .4 2.2.6 3.4.6 1.3 0 2.5-.2 3.5-.6 1-.4 1.8-1 2.4-1.7a4 4 0 0 0 .8-2.4c0-.9-.2-1.6-.7-2.2a11 11 0 0 0-2.1-1.4l-3.2-1-3.8-1c-2.8-.7-5-1.7-6.6-3.2a7.2 7.2 0 0 1-2.4-5.7 8 8 0 0 1 1.7-5 10 10 0 0 1 4.3-3.5c2-.8 4-1.2 6.4-1.2 2.3 0 4.4.4 6.2 1.2 1.8.8 3.2 2 4.3 3.4 1 1.4 1.5 3 1.5 5h-5.8z"/></svg>
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 394 80">
  <path fill="#000"
        d="M262 0h68.5v12.7h-27.2v66.6h-13.6V12.7H262V0ZM149 0v12.7H94v20.4h44.3v12.6H94v21h55v12.6H80.5V0h68.7zm34.3 0h-17.8l63.8 79.4h17.9l-32-39.7 32-39.6h-17.9l-23 28.6-23-28.6zm18.3 56.7-9-11-27.1 33.7h17.8l18.3-22.7z"/>
  <path fill="#000"
        d="M81 79.3 17 0H0v79.3h13.6V17l50.2 62.3H81Zm252.6-.4c-1 0-1.8-.4-2.5-1s-1.1-1.6-1.1-2.6.3-1.8 1-2.5 1.6-1 2.6-1 1.8.3 2.5 1a3.4 3.4 0 0 1 .6 4.3 3.7 3.7 0 0 1-3 1.8zm23.2-33.5h6v23.3c0 2.1-.4 4-1.3 5.5a9.1 9.1 0 0 1-3.8 3.5c-1.6.8-3.5 1.3-5.7 1.3-2 0-3.7-.4-5.3-1s-2.8-1.8-3.7-3.2c-.9-1.3-1.4-3-1.4-5h6c.1.8.3 1.6.7 2.2s1 1.2 1.6 1.5c.7.4 1.5.5 2.4.5 1 0 1.8-.2 2.4-.6a4 4 0 0 0 1.6-1.8c.3-.8.5-1.8.5-3V45.5zm30.9 9.1a4.4 4.4 0 0 0-2-3.3 7.5 7.5 0 0 0-4.3-1.1c-1.3 0-2.4.2-3.3.5-.9.4-1.6 1-2 1.6a3.5 3.5 0 0 0-.3 4c.3.5.7.9 1.3 1.2l1.8 1 2 .5 3.2.8c1.3.3 2.5.7 3.7 1.2a13 13 0 0 1 3.2 1.8 8.1 8.1 0 0 1 3 6.5c0 2-.5 3.7-1.5 5.1a10 10 0 0 1-4.4 3.5c-1.8.8-4.1 1.2-6.8 1.2-2.6 0-4.9-.4-6.8-1.2-2-.8-3.4-2-4.5-3.5a10 10 0 0 1-1.7-5.6h6a5 5 0 0 0 3.5 4.6c1 .4 2.2.6 3.4.6 1.3 0 2.5-.2 3.5-.6 1-.4 1.8-1 2.4-1.7a4 4 0 0 0 .8-2.4c0-.9-.2-1.6-.7-2.2a11 11 0 0 0-2.1-1.4l-3.2-1-3.8-1c-2.8-.7-5-1.7-6.6-3.2a7.2 7.2 0 0 1-2.4-5.7 8 8 0 0 1 1.7-5 10 10 0 0 1 4.3-3.5c2-.8 4-1.2 6.4-1.2 2.3 0 4.4.4 6.2 1.2 1.8.8 3.2 2 4.3 3.4 1 1.4 1.5 3 1.5 5h-5.8z"/>
</svg>
Before Width: | Height: | Size: 1.3 KiB After Width: | Height: | Size: 1.4 KiB
@@ -1 +1,4 @@
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 283 64"><path fill="black" d="M141 16c-11 0-19 7-19 18s9 18 20 18c7 0 13-3 16-7l-7-5c-2 3-6 4-9 4-5 0-9-3-10-7h28v-3c0-11-8-18-19-18zm-9 15c1-4 4-7 9-7s8 3 9 7h-18zm117-15c-11 0-19 7-19 18s9 18 20 18c6 0 12-3 16-7l-8-5c-2 3-5 4-8 4-5 0-9-3-11-7h28l1-3c0-11-8-18-19-18zm-10 15c2-4 5-7 10-7s8 3 9 7h-19zm-39 3c0 6 4 10 10 10 4 0 7-2 9-5l8 5c-3 5-9 8-17 8-11 0-19-7-19-18s8-18 19-18c8 0 14 3 17 8l-8 5c-2-3-5-5-9-5-6 0-10 4-10 10zm83-29v46h-9V5h9zM37 0l37 64H0L37 0zm92 5-27 48L74 5h10l18 30 17-30h10zm59 12v10l-3-1c-6 0-10 4-10 10v15h-9V17h9v9c0-5 6-9 13-9z"/></svg>
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 283 64">
  <path fill="black"
        d="M141 16c-11 0-19 7-19 18s9 18 20 18c7 0 13-3 16-7l-7-5c-2 3-6 4-9 4-5 0-9-3-10-7h28v-3c0-11-8-18-19-18zm-9 15c1-4 4-7 9-7s8 3 9 7h-18zm117-15c-11 0-19 7-19 18s9 18 20 18c6 0 12-3 16-7l-8-5c-2 3-5 4-8 4-5 0-9-3-11-7h28l1-3c0-11-8-18-19-18zm-10 15c2-4 5-7 10-7s8 3 9 7h-19zm-39 3c0 6 4 10 10 10 4 0 7-2 9-5l8 5c-3 5-9 8-17 8-11 0-19-7-19-18s8-18 19-18c8 0 14 3 17 8l-8 5c-2-3-5-5-9-5-6 0-10 4-10 10zm83-29v46h-9V5h9zM37 0l37 64H0L37 0zm92 5-27 48L74 5h10l18 30 17-30h10zm59 12v10l-3-1c-6 0-10 4-10 10v15h-9V17h9v9c0-5 6-9 13-9z"/>
</svg>
Before Width: | Height: | Size: 629 B After Width: | Height: | Size: 645 B
@@ -1,6 +1,10 @@
{
  "compilerOptions": {
    "lib": ["dom", "dom.iterable", "esnext"],
    "lib": [
      "dom",
      "dom.iterable",
      "esnext"
    ],
    "allowJs": true,
    "skipLibCheck": true,
    "strict": true,
@@ -18,9 +22,18 @@
      }
    ],
    "paths": {
      "@/*": ["./*"]
      "@/*": [
        "./*"
      ]
    }
  },
  "include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"],
  "exclude": ["node_modules"]
  "include": [
    "next-env.d.ts",
    "**/*.ts",
    "**/*.tsx",
    ".next/types/**/*.ts"
  ],
  "exclude": [
    "node_modules"
  ]
}