chore: format project

Justin Luoma 2024-05-25 08:16:13 -04:00
parent 79d4d87f24
commit cd7722afdb
31 changed files with 504 additions and 409 deletions


@@ -1,13 +1,17 @@
# How to Contribute to Perplexica

Hey there, thanks for deciding to contribute to Perplexica. Anything you help with will support the development of
Perplexica and will make it better. Let's walk you through the key aspects to ensure your contributions are effective
and in harmony with the project's setup.

## Project Structure

Perplexica's design consists of two main domains:

- **Frontend (`ui` directory)**: This is a Next.js application holding all user interface components. It's a
  self-contained environment that manages everything the user interacts with.
- **Backend (root and `src` directory)**: The backend logic is situated in the `src` folder, but the root directory
  holds the main `package.json` for backend dependency management.

## Setting Up Your Environment
@@ -22,18 +26,23 @@ Before diving into coding, setting up your local environment is key. Here's what
### Frontend

1. Navigate to the `ui` folder and repeat the process of renaming `.env.example` to `.env`, making sure to provide the
   frontend-specific variables.
2. Execute `npm install` within the `ui` directory to get the frontend dependencies ready.
3. Launch the frontend development server with `npm run dev`.

**Please note**: Docker configurations are present for setting up production environments, whereas `npm run dev` is used
for development purposes.

## Coding and Contribution Practices

Before committing changes:

1. Ensure that your code functions correctly through thorough testing.
2. Always run `npm run format:write` to format your code according to the project's coding standards. This helps
   maintain consistency and code quality.
3. We currently do not have a code of conduct, but it is in the works. In the meantime, please be mindful of how you
   engage with the project and its community.

Following these steps will help maintain the integrity of Perplexica's codebase and facilitate a smoother integration of
your valuable contributions. Thank you for your support and commitment to improving Perplexica.

README.md

@@ -8,24 +8,29 @@
- [Preview](#preview)
- [Features](#features)
- [Installation](#installation)
  - [Getting Started with Docker (Recommended)](#getting-started-with-docker-recommended)
  - [Non-Docker Installation](#non-docker-installation)
  - [Ollama connection errors](#ollama-connection-errors)
- [Using as a Search Engine](#using-as-a-search-engine)
- [One-Click Deployment](#one-click-deployment)
- [Upcoming Features](#upcoming-features)
- [Support Us](#support-us)
  - [Donations](#donations)
- [Contribution](#contribution)
- [Help and Support](#help-and-support)
## Overview

Perplexica is an open-source, AI-powered search engine that goes deep into the internet to find answers. Inspired by
Perplexity AI, it's an open-source option that not only searches the web but also understands your questions. It uses
advanced machine learning algorithms like similarity searching and embeddings to refine results and provides clear
answers with sources cited.

Using SearxNG to stay current and fully open source, Perplexica ensures you always get the most up-to-date information
without compromising your privacy.

Want to know more about its architecture and how it works? You can read
it [here](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/README.md).

## Preview
@@ -35,18 +40,24 @@ Want to know more about its architecture and how it works? You can read it [here
- **Local LLMs**: You can make use of local LLMs such as Llama3 and Mixtral using Ollama.
- **Two Main Modes:**
  - **Copilot Mode:** (In development) Boosts search by generating different queries to find more relevant internet
    sources. Unlike normal search, instead of just using the context returned by SearxNG, it visits the top matches and
    tries to find sources relevant to the user's query directly from the page.
  - **Normal Mode:** Processes your query and performs a web search.
- **Focus Modes:** Special modes to better answer specific types of questions. Perplexica currently has 6 focus modes:
  - **All Mode:** Searches the entire web to find the best results.
  - **Writing Assistant Mode:** Helpful for writing tasks that do not require searching the web.
  - **Academic Search Mode:** Finds articles and papers, ideal for academic research.
  - **YouTube Search Mode:** Finds YouTube videos based on the search query.
  - **Wolfram Alpha Search Mode:** Answers queries that need calculations or data analysis using Wolfram Alpha.
  - **Reddit Search Mode:** Searches Reddit for discussions and opinions related to the query.
- **Current Information:** Some search tools might give you outdated info because they use data from crawling bots,
  convert it into embeddings, and store it in an index. Unlike them, Perplexica uses SearxNG, a metasearch engine, to
  get the results, rerank them, and pick the most relevant sources, ensuring you always get the latest information
  without the overhead of daily data updates.

It has many more features like image and video search. Some of the planned features are mentioned
in [upcoming features](#upcoming-features).

## Installation
@@ -65,13 +76,16 @@ There are mainly 2 ways of installing Perplexica - With Docker, Without Docker.
4. Rename the `sample.config.toml` file to `config.toml`. For Docker setups, you need only fill in the following fields:

   - `OPENAI`: Your OpenAI API key. **You only need to fill this if you wish to use OpenAI's models**.
   - `OLLAMA`: Your Ollama API URL. You should enter it as `http://host.docker.internal:PORT_NUMBER`. If you installed
     Ollama on port 11434, use `http://host.docker.internal:11434`. For other ports, adjust accordingly. **You need to
     fill this if you wish to use Ollama's models instead of OpenAI's**.
   - `GROQ`: Your Groq API key. **You only need to fill this if you wish to use Groq's hosted models**.

     **Note**: You can change these after starting Perplexica from the settings dialog.

   - `SIMILARITY_MEASURE`: The similarity measure to use (this is filled by default; you can leave it as is if you are
     unsure about it).
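Filled in for a Docker setup, those fields might look like the sketch below. The flat layout and the values shown are assumptions for illustration only; mirror the actual structure of `sample.config.toml` in the repository.

```toml
# Illustrative config.toml fragment — follow sample.config.toml's real layout.
OPENAI = "sk-..."                              # leave empty if you don't use OpenAI models
OLLAMA = "http://host.docker.internal:11434"   # Ollama on its default port
GROQ = ""                                      # only needed for Groq-hosted models
SIMILARITY_MEASURE = "cosine"                  # pre-filled default; "cosine" is an assumed value
```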
5. Ensure you are in the directory containing the `docker-compose.yaml` file and execute:
@@ -81,23 +95,29 @@ There are mainly 2 ways of installing Perplexica - With Docker, Without Docker.
6. Wait a few minutes for the setup to complete. You can access Perplexica at http://localhost:3000 in your web browser.

**Note**: After the containers are built, you can start Perplexica directly from Docker without having to open a
terminal.
### Non-Docker Installation

1. Clone the repository and rename the `sample.config.toml` file to `config.toml` in the root directory. Ensure you
   complete all required fields in this file.
2. Rename the `.env.example` file to `.env` in the `ui` folder and fill in all necessary fields.
3. After populating the configuration and environment files, run `npm i` in both the `ui` folder and the root directory.
4. Install the dependencies and then execute `npm run build` in both the `ui` folder and the root directory.
5. Finally, start both the frontend and the backend by running `npm run start` in both the `ui` folder and the root
   directory.

**Note**: Using Docker is recommended as it simplifies the setup process, especially for managing environment variables
and dependencies.

See the [installation documentation](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/installation) for more
information, like exposing it to your network, etc.
### Ollama connection errors

If you're facing an Ollama connection error, it is often related to the backend not being able to connect to Ollama's
API. You can fix it by updating your Ollama API URL in the settings menu to the following:

On Windows: `http://host.docker.internal:11434`<br>
On Mac: `http://host.docker.internal:11434`<br>
@@ -107,11 +127,13 @@ You need to edit the ports accordingly.
## Using as a Search Engine

If you wish to use Perplexica as an alternative to traditional search engines like Google or Bing, or if you want to add
a shortcut for quick access from your browser's search bar, follow these steps:

1. Open your browser's settings.
2. Navigate to the 'Search Engines' section.
3. Add a new site search with the following URL: `http://localhost:3000/?q=%s`. Replace `localhost` with your IP address
   or domain name, and `3000` with the port number if Perplexica is not hosted locally.
4. Click the add button. Now, you can use Perplexica directly from your browser's search bar.
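The `%s` in that template is replaced by your browser with the URL-encoded query. A small sketch of the expansion (the function name is hypothetical, not part of Perplexica):

```python
from urllib.parse import quote_plus

def perplexica_search_url(query: str, host: str = "localhost", port: int = 3000) -> str:
    # Mimics the browser filling the %s slot in http://localhost:3000/?q=%s
    return f"http://{host}:{port}/?q={quote_plus(query)}"

print(perplexica_search_url("how does an A.C. work"))
# → http://localhost:3000/?q=how+does+an+A.C.+work
```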
## One-Click Deployment
@@ -128,11 +150,13 @@ If you wish to use Perplexica as an alternative to traditional search engines li
## Support Us

If you find Perplexica useful, consider giving us a star on GitHub. This helps more people discover Perplexica and
supports the development of new features. Your support is greatly appreciated.

### Donations

We also accept donations to help sustain our project. If you would like to contribute, you can use the following button
to make a donation in cryptocurrency. Thank you for your support!

<a href="https://nowpayments.io/donation?api_key=RFFKJH1-GRR4DQG-HFV1DZP-00G6MMK&source=lk_donation&medium=referral" target="_blank">
<img src="https://nowpayments.io/images/embeds/donation-button-white.svg" alt="Crypto donation button by NOWPayments">
@@ -140,10 +164,17 @@ We also accept donations to help sustain our project. If you would like to contr
## Contribution

Perplexica is built on the idea that AI and large language models should be easy for everyone to use. If you find bugs
or have ideas, please share them via GitHub Issues. For more information on contributing to Perplexica, you can read
the [CONTRIBUTING.md](CONTRIBUTING.md) file to learn more about Perplexica and how you can contribute to it.

## Help and Support

If you have any questions or feedback, please feel free to reach out to us. You can create an issue on GitHub or join
our Discord server. There, you can connect with other users, share your experiences and reviews, and receive more
personalized help. [Click here](https://discord.gg/EFwsmQDgAu) to join the Discord server. To discuss matters outside of
regular support, feel free to contact me on Discord at `itzcrazykns`.

Thank you for exploring Perplexica, the AI-powered search engine designed to enhance your search experience. We are
constantly working to improve Perplexica and expand its capabilities. We value your feedback and contributions, which
help us make Perplexica even better. Don't forget to check back for updates and new features!


@@ -2,10 +2,15 @@
Perplexica's architecture consists of the following key components:

1. **User Interface**: A web-based interface that allows users to interact with Perplexica for searching images, videos,
   and much more.
2. **Agent/Chains**: These components predict Perplexica's next actions, understand user queries, and decide whether a
   web search is necessary.
3. **SearXNG**: A metasearch engine used by Perplexica to search the web for sources.
4. **LLMs (Large Language Models)**: Utilized by agents and chains for tasks like understanding content, writing
   responses, and citing sources. Examples include Claude, GPTs, etc.
5. **Embedding Models**: To improve the accuracy of search results, embedding models re-rank the results using
   similarity search algorithms such as cosine similarity and dot product distance.

For a more detailed explanation of how these components work together,
see [WORKING.md](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/WORKING.md).
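The two measures named in point 5 are closely related: on length-normalized embeddings, dot product and cosine similarity coincide. A minimal pure-Python illustration with toy vectors (no project code assumed):

```python
import math

def dot(a, b):
    # Dot product of two equal-length embedding vectors.
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    # Scale a vector to unit length.
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def cosine(a, b):
    # Dot product divided by the magnitudes; ranges from -1 to 1.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
# Cosine similarity equals the dot product of the normalized vectors.
print(abs(cosine(a, b) - dot(normalize(a), normalize(b))) < 1e-9)  # → True
```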


@@ -1,19 +1,31 @@
## How does Perplexica work?

Curious about how Perplexica works? Don't worry, we'll cover it here. Before we begin, make sure you've read about the
architecture of Perplexica to ensure you understand what it's made up of. Haven't read it? You can read
it [here](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/README.md).

We'll understand how Perplexica works by taking an example of a scenario where a user asks: "How does an A.C. work?".
We'll break down the process into steps to make it easier to understand. The steps are as follows:

1. The message is sent via WS (WebSocket) to the backend server, where it invokes the chain. The chain depends on your
   focus mode. For this example, let's assume we use the "webSearch" focus mode.
2. The chain is now invoked; first, the message is passed to another chain that predicts (using the chat
   history and the question) whether sources and a web search are needed. If they are, it generates a query
   (in accordance with the chat history) for searching the web, which we'll take up later. If not, the chain ends
   there, and the answer generator chain, also known as the response generator, is started.
3. The query returned by the first chain is passed to SearXNG to search the web for information.
4. The retrieved information is based on keyword search, so we convert both it and the query into embeddings, then
   perform a similarity search to find the sources most relevant to answering the query.
5. After all this is done, the sources are passed to the response generator. This chain takes the chat history, the
   query, and the sources, and generates a response that is streamed to the UI.
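Step 4 can be sketched as follows. This is an illustrative re-ranking in pure Python with toy embedding vectors; the names (`cosine`, `rerank`) and the vectors are assumptions for this sketch, not Perplexica's actual TypeScript implementation, which uses real embedding models.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def rerank(query_emb, sources):
    # sources: list of (title, embedding). Return titles, most similar first.
    scored = [(cosine(query_emb, emb), title) for title, emb in sources]
    return [title for _, title in sorted(scored, reverse=True)]

query_emb = [0.9, 0.1, 0.0]                       # toy embedding of the user's query
sources = [
    ("refrigeration cycle", [0.8, 0.2, 0.1]),     # close to the query vector
    ("history of cooling", [0.1, 0.9, 0.3]),      # far from the query vector
]
print(rerank(query_emb, sources))
# → ['refrigeration cycle', 'history of cooling']
```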
### How are the answers cited?

The LLMs are prompted to do so. We've prompted them so well that they cite the answers themselves, and using some UI
magic, we display it to the user.

### Image and Video Search

Image and video searches are conducted in a similar manner. A query is always generated first, then we search the web
for images and videos that match the query. These results are then returned to the user.


@@ -1,6 +1,7 @@
# Expose Perplexica to a network

This guide will show you how to make Perplexica available over a network. Follow these steps to allow computers on the
same network to interact with Perplexica. Choose the instructions that match the operating system you are using.

## Windows


@@ -32,12 +32,12 @@ search:
   # Existing autocomplete backends: "dbpedia", "duckduckgo", "google", "yandex", "mwmbl",
   # "seznam", "startpage", "stract", "swisscows", "qwant", "wikipedia" - leave blank to turn it off
   # by default.
-  autocomplete: 'google'
+  autocomplete: 'duckduckgo'
   # minimun characters to type before autocompleter starts
-  autocomplete_min: 4
+  autocomplete_min: 3
   # Default search language - leave blank to detect from browser information or
   # use codes from 'languages.py'
-  default_lang: 'auto'
+  default_lang: 'en-US'
   # max_page: 0 # if engine supports paging, 0 means unlimited numbers of pages
   # Available languages
   # languages:
@@ -213,15 +213,15 @@ outgoing:
 # Comment or un-comment plugin to activate / deactivate by default.
 #
-# enabled_plugins:
+enabled_plugins:
 # # these plugins are enabled if nothing is configured ..
 # - 'Hash plugin'
 # - 'Self Information'
 # - 'Tracker URL remover'
 # - 'Ahmia blacklist' # activation depends on outgoing.using_tor_proxy
 # # these plugins are disabled if nothing is configured ..
 # - 'Hostname replace' # see hostname_replace configuration below
-# - 'Open Access DOI rewrite'
+ - 'Open Access DOI rewrite'
 # - 'Tor check plugin'
 # # Read the docs before activate: auto-detection of the language could be
 # # detrimental to users expectations / users can activate the plugin in the
@@ -265,17 +265,17 @@ checker:
       lang: en
       result_container:
         - not_empty
-        - ['one_title_contains', 'citizen kane']
+        - [ 'one_title_contains', 'citizen kane' ]
       test:
         - unique_results

   android: &test_android
     matrix:
-      query: ['android']
-      lang: ['en', 'de', 'fr', 'zh-CN']
+      query: [ 'android' ]
+      lang: [ 'en', 'de', 'fr', 'zh-CN' ]
     result_container:
       - not_empty
-      - ['one_title_contains', 'google']
+      - [ 'one_title_contains', 'google' ]
     test:
       - unique_results
@@ -284,7 +284,7 @@ checker:
   infobox: &tests_infobox
     infobox:
       matrix:
-        query: ['linux', 'new york', 'bbc']
+        query: [ 'linux', 'new york', 'bbc' ]
       result_container:
         - has_infobox
@@ -384,9 +384,9 @@ engines:
     engine: wikipedia
     shortcut: wp
     # add "list" to the array to get results in the results list
-    display_type: ['infobox']
+    display_type: [ 'infobox' ]
     base_url: 'https://{language}.wikipedia.org/'
-    categories: [general]
+    categories: [ general ]

   - name: bilibili
     engine: bilibili
@@ -417,7 +417,7 @@ engines:
     url_xpath: //article[@class="repo-summary"]//a[@class="repo-link"]/@href
     title_xpath: //article[@class="repo-summary"]//a[@class="repo-link"]
     content_xpath: //article[@class="repo-summary"]/p
-    categories: [it, repos]
+    categories: [ it, repos ]
     timeout: 4.0
     disabled: true
     shortcut: bb
@@ -593,7 +593,7 @@ engines:
   - name: docker hub
     engine: docker_hub
     shortcut: dh
-    categories: [it, packages]
+    categories: [ it, packages ]

   - name: erowid
     engine: xpath
@@ -604,7 +604,7 @@ engines:
     url_xpath: //dl[@class="results-list"]/dt[@class="result-title"]/a/@href
     title_xpath: //dl[@class="results-list"]/dt[@class="result-title"]/a/text()
     content_xpath: //dl[@class="results-list"]/dd[@class="result-details"]
-    categories: []
+    categories: [ ]
     shortcut: ew
     disabled: true
     about:
@ -635,31 +635,32 @@ engines:
timeout: 3.0 timeout: 3.0
weight: 2 weight: 2
# add "list" to the array to get results in the results list # add "list" to the array to get results in the results list
display_type: ['infobox'] display_type: [ 'infobox' ]
tests: *tests_infobox tests: *tests_infobox
categories: [general] categories: [ general ]
- name: duckduckgo - name: duckduckgo
engine: duckduckgo engine: duckduckgo
shortcut: ddg shortcut: ddg
weight: 2.0
- name: duckduckgo images - name: duckduckgo images
engine: duckduckgo_extra engine: duckduckgo_extra
categories: [images, web] categories: [ images, web ]
ddg_category: images ddg_category: images
shortcut: ddi shortcut: ddi
disabled: true disabled: true
- name: duckduckgo videos - name: duckduckgo videos
engine: duckduckgo_extra engine: duckduckgo_extra
categories: [videos, web] categories: [ videos, web ]
ddg_category: videos ddg_category: videos
shortcut: ddv shortcut: ddv
disabled: true disabled: true
- name: duckduckgo news - name: duckduckgo news
engine: duckduckgo_extra engine: duckduckgo_extra
categories: [news, web] categories: [ news, web ]
ddg_category: news ddg_category: news
shortcut: ddn shortcut: ddn
disabled: true disabled: true
@ -696,7 +697,7 @@ engines:
content_xpath: //section[contains(@class, "word__defination")] content_xpath: //section[contains(@class, "word__defination")]
first_page_num: 1 first_page_num: 1
shortcut: et shortcut: et
categories: [dictionaries] categories: [ dictionaries ]
about: about:
website: https://www.etymonline.com/ website: https://www.etymonline.com/
wikidata_id: Q1188617 wikidata_id: Q1188617
@ -736,7 +737,7 @@ engines:
- name: free software directory - name: free software directory
engine: mediawiki engine: mediawiki
shortcut: fsd shortcut: fsd
categories: [it, software wikis] categories: [ it, software wikis ]
base_url: https://directory.fsf.org/ base_url: https://directory.fsf.org/
search_type: title search_type: title
timeout: 5.0 timeout: 5.0
@ -781,7 +782,7 @@ engines:
title_query: name_with_namespace title_query: name_with_namespace
content_query: description content_query: description
page_size: 20 page_size: 20
categories: [it, repos] categories: [ it, repos ]
shortcut: gl shortcut: gl
timeout: 10.0 timeout: 10.0
disabled: true disabled: true
@ -807,7 +808,7 @@ engines:
url_query: html_url url_query: html_url
title_query: name title_query: name
content_query: description content_query: description
categories: [it, repos] categories: [ it, repos ]
shortcut: cb shortcut: cb
disabled: true disabled: true
about: about:
@ -860,7 +861,7 @@ engines:
- name: google play apps - name: google play apps
engine: google_play engine: google_play
categories: [files, apps] categories: [ files, apps ]
shortcut: gpa shortcut: gpa
play_categ: apps play_categ: apps
disabled: true disabled: true
@ -932,7 +933,7 @@ engines:
url_xpath: './/div[@class="ans"]//a/@href' url_xpath: './/div[@class="ans"]//a/@href'
content_xpath: './/div[@class="from"]' content_xpath: './/div[@class="from"]'
page_size: 20 page_size: 20
categories: [it, packages] categories: [ it, packages ]
shortcut: ho shortcut: ho
about: about:
website: https://hoogle.haskell.org/ website: https://hoogle.haskell.org/
@ -1093,7 +1094,7 @@ engines:
- name: mdn - name: mdn
shortcut: mdn shortcut: mdn
engine: json_engine engine: json_engine
categories: [it] categories: [ it ]
paging: true paging: true
search_url: https://developer.mozilla.org/api/v1/search?q={query}&page={pageno} search_url: https://developer.mozilla.org/api/v1/search?q={query}&page={pageno}
results_query: documents results_query: documents
@ -1167,7 +1168,7 @@ engines:
title_query: package/name title_query: package/name
content_query: package/description content_query: package/description
page_size: 25 page_size: 25
categories: [it, packages] categories: [ it, packages ]
disabled: true disabled: true
timeout: 5.0 timeout: 5.0
shortcut: npm shortcut: npm
@ -1281,7 +1282,7 @@ engines:
url_query: url url_query: url
title_query: name title_query: name
content_query: description content_query: description
categories: [it, packages] categories: [ it, packages ]
disabled: true disabled: true
timeout: 5.0 timeout: 5.0
shortcut: pack shortcut: pack
@ -1355,7 +1356,7 @@ engines:
- name: presearch - name: presearch
engine: presearch engine: presearch
search_type: search search_type: search
categories: [general, web] categories: [ general, web ]
shortcut: ps shortcut: ps
timeout: 4.0 timeout: 4.0
disabled: true disabled: true
@ -1364,7 +1365,7 @@ engines:
engine: presearch engine: presearch
network: presearch network: presearch
search_type: images search_type: images
categories: [images, web] categories: [ images, web ]
timeout: 4.0 timeout: 4.0
shortcut: psimg shortcut: psimg
disabled: true disabled: true
@ -1373,7 +1374,7 @@ engines:
engine: presearch engine: presearch
network: presearch network: presearch
search_type: videos search_type: videos
categories: [general, web] categories: [ general, web ]
timeout: 4.0 timeout: 4.0
shortcut: psvid shortcut: psvid
disabled: true disabled: true
@ -1382,7 +1383,7 @@ engines:
engine: presearch engine: presearch
network: presearch network: presearch
search_type: news search_type: news
categories: [news, web] categories: [ news, web ]
timeout: 4.0 timeout: 4.0
shortcut: psnews shortcut: psnews
disabled: true disabled: true
@ -1396,7 +1397,7 @@ engines:
url_xpath: ./div/h3/a/@href url_xpath: ./div/h3/a/@href
title_xpath: ./div/h3/a title_xpath: ./div/h3/a
content_xpath: ./div/div/div[contains(@class,"packages-description")]/span content_xpath: ./div/div/div[contains(@class,"packages-description")]/span
categories: [packages, it] categories: [ packages, it ]
timeout: 3.0 timeout: 3.0
disabled: true disabled: true
first_page_num: 1 first_page_num: 1
@ -1423,7 +1424,7 @@ engines:
content_xpath: ./p content_xpath: ./p
suggestion_xpath: /html/body/main/div/div/div/form/div/div[@class="callout-block"]/p/span/a[@class="link"] suggestion_xpath: /html/body/main/div/div/div/form/div/div[@class="callout-block"]/p/span/a[@class="link"]
first_page_num: 1 first_page_num: 1
categories: [it, packages] categories: [ it, packages ]
about: about:
website: https://pypi.org website: https://pypi.org
wikidata_id: Q2984686 wikidata_id: Q2984686
@ -1436,7 +1437,7 @@ engines:
qwant_categ: web qwant_categ: web
engine: qwant engine: qwant
shortcut: qw shortcut: qw
categories: [general, web] categories: [ general, web ]
additional_tests: additional_tests:
rosebud: *test_rosebud rosebud: *test_rosebud
@ -1451,14 +1452,14 @@ engines:
qwant_categ: images qwant_categ: images
engine: qwant engine: qwant
shortcut: qwi shortcut: qwi
categories: [images, web] categories: [ images, web ]
network: qwant network: qwant
- name: qwant videos - name: qwant videos
qwant_categ: videos qwant_categ: videos
engine: qwant engine: qwant
shortcut: qwv shortcut: qwv
categories: [videos, web] categories: [ videos, web ]
network: qwant network: qwant
# - name: library # - name: library
@ -1526,13 +1527,13 @@ engines:
engine: stackexchange engine: stackexchange
shortcut: st shortcut: st
api_site: 'stackoverflow' api_site: 'stackoverflow'
categories: [it, q&a] categories: [ it, q&a ]
- name: askubuntu - name: askubuntu
engine: stackexchange engine: stackexchange
shortcut: ubuntu shortcut: ubuntu
api_site: 'askubuntu' api_site: 'askubuntu'
categories: [it, q&a] categories: [ it, q&a ]
- name: internetarchivescholar - name: internetarchivescholar
engine: internet_archive_scholar engine: internet_archive_scholar
@ -1543,7 +1544,7 @@ engines:
engine: stackexchange engine: stackexchange
shortcut: su shortcut: su
api_site: 'superuser' api_site: 'superuser'
categories: [it, q&a] categories: [ it, q&a ]
- name: searchcode code - name: searchcode code
engine: searchcode_code engine: searchcode_code
@ -1737,7 +1738,7 @@ engines:
url_query: URL url_query: URL
title_query: Title title_query: Title
content_query: Snippet content_query: Snippet
categories: [general, web] categories: [ general, web ]
shortcut: wib shortcut: wib
disabled: true disabled: true
about: about:
@ -1766,7 +1767,7 @@ engines:
engine: mediawiki engine: mediawiki
weight: 0.5 weight: 0.5
shortcut: wb shortcut: wb
categories: [general, wikimedia] categories: [ general, wikimedia ]
base_url: 'https://{language}.wikibooks.org/' base_url: 'https://{language}.wikibooks.org/'
search_type: text search_type: text
disabled: true disabled: true
@ -1777,7 +1778,7 @@ engines:
- name: wikinews - name: wikinews
engine: mediawiki engine: mediawiki
shortcut: wn shortcut: wn
categories: [news, wikimedia] categories: [ news, wikimedia ]
base_url: 'https://{language}.wikinews.org/' base_url: 'https://{language}.wikinews.org/'
search_type: text search_type: text
srsort: create_timestamp_desc srsort: create_timestamp_desc
@ -1789,7 +1790,7 @@ engines:
engine: mediawiki engine: mediawiki
weight: 0.5 weight: 0.5
shortcut: wq shortcut: wq
categories: [general, wikimedia] categories: [ general, wikimedia ]
base_url: 'https://{language}.wikiquote.org/' base_url: 'https://{language}.wikiquote.org/'
search_type: text search_type: text
disabled: true disabled: true
@ -1803,7 +1804,7 @@ engines:
engine: mediawiki engine: mediawiki
weight: 0.5 weight: 0.5
shortcut: ws shortcut: ws
categories: [general, wikimedia] categories: [ general, wikimedia ]
base_url: 'https://{language}.wikisource.org/' base_url: 'https://{language}.wikisource.org/'
search_type: text search_type: text
disabled: true disabled: true
@ -1814,7 +1815,7 @@ engines:
- name: wikispecies - name: wikispecies
engine: mediawiki engine: mediawiki
shortcut: wsp shortcut: wsp
categories: [general, science, wikimedia] categories: [ general, science, wikimedia ]
base_url: 'https://species.wikimedia.org/' base_url: 'https://species.wikimedia.org/'
search_type: text search_type: text
disabled: true disabled: true
@ -1825,7 +1826,7 @@ engines:
- name: wiktionary - name: wiktionary
engine: mediawiki engine: mediawiki
shortcut: wt shortcut: wt
categories: [dictionaries, wikimedia] categories: [ dictionaries, wikimedia ]
base_url: 'https://{language}.wiktionary.org/' base_url: 'https://{language}.wiktionary.org/'
search_type: text search_type: text
about: about:
@ -1836,7 +1837,7 @@ engines:
engine: mediawiki engine: mediawiki
weight: 0.5 weight: 0.5
shortcut: wv shortcut: wv
categories: [general, wikimedia] categories: [ general, wikimedia ]
base_url: 'https://{language}.wikiversity.org/' base_url: 'https://{language}.wikiversity.org/'
search_type: text search_type: text
disabled: true disabled: true
@ -1848,7 +1849,7 @@ engines:
engine: mediawiki engine: mediawiki
weight: 0.5 weight: 0.5
shortcut: wy shortcut: wy
categories: [general, wikimedia] categories: [ general, wikimedia ]
base_url: 'https://{language}.wikivoyage.org/' base_url: 'https://{language}.wikivoyage.org/'
search_type: text search_type: text
disabled: true disabled: true
@ -1926,7 +1927,7 @@ engines:
shortcut: mjk shortcut: mjk
engine: xpath engine: xpath
paging: true paging: true
categories: [general, web] categories: [ general, web ]
search_url: https://www.mojeek.com/search?q={query}&s={pageno}&lang={lang}&lb={lang} search_url: https://www.mojeek.com/search?q={query}&s={pageno}&lang={lang}&lb={lang}
results_xpath: //ul[@class="results-standard"]/li/a[@class="ob"] results_xpath: //ul[@class="results-standard"]/li/a[@class="ob"]
url_xpath: ./@href url_xpath: ./@href
@ -1952,7 +1953,7 @@ engines:
- name: naver - name: naver
shortcut: nvr shortcut: nvr
categories: [general, web] categories: [ general, web ]
engine: xpath engine: xpath
paging: true paging: true
search_url: https://search.naver.com/search.naver?where=webkr&sm=osp_hty&ie=UTF-8&query={query}&start={pageno} search_url: https://search.naver.com/search.naver?where=webkr&sm=osp_hty&ie=UTF-8&query={query}&start={pageno}
@ -1982,7 +1983,7 @@ engines:
content_xpath: ./span/p content_xpath: ./span/p
suggestion_xpath: /html/body/main/div/div[@class="search__suggestions"]/p/a suggestion_xpath: /html/body/main/div/div[@class="search__suggestions"]/p/a
first_page_num: 1 first_page_num: 1
categories: [it, packages] categories: [ it, packages ]
disabled: true disabled: true
about: about:
website: https://rubygems.org/ website: https://rubygems.org/
@ -2047,20 +2048,20 @@ engines:
engine: wordnik engine: wordnik
shortcut: def shortcut: def
base_url: https://www.wordnik.com/ base_url: https://www.wordnik.com/
categories: [dictionaries] categories: [ dictionaries ]
timeout: 5.0 timeout: 5.0
- name: woxikon.de synonyme - name: woxikon.de synonyme
engine: xpath engine: xpath
shortcut: woxi shortcut: woxi
categories: [dictionaries] categories: [ dictionaries ]
timeout: 5.0 timeout: 5.0
disabled: true disabled: true
search_url: https://synonyme.woxikon.de/synonyme/{query}.php search_url: https://synonyme.woxikon.de/synonyme/{query}.php
url_xpath: //div[@class="upper-synonyms"]/a/@href url_xpath: //div[@class="upper-synonyms"]/a/@href
content_xpath: //div[@class="synonyms-list-group"] content_xpath: //div[@class="synonyms-list-group"]
title_xpath: //div[@class="upper-synonyms"]/a title_xpath: //div[@class="upper-synonyms"]/a
no_result_for_http_status: [404] no_result_for_http_status: [ 404 ]
about: about:
website: https://www.woxikon.de/ website: https://www.woxikon.de/
wikidata_id: # No Wikidata ID wikidata_id: # No Wikidata ID
@ -2154,7 +2155,7 @@ engines:
shortcut: br shortcut: br
time_range_support: true time_range_support: true
paging: true paging: true
categories: [general, web] categories: [ general, web ]
brave_category: search brave_category: search
# brave_spellcheck: true # brave_spellcheck: true
@ -2162,14 +2163,14 @@ engines:
engine: brave engine: brave
network: brave network: brave
shortcut: brimg shortcut: brimg
categories: [images, web] categories: [ images, web ]
brave_category: images brave_category: images
- name: brave.videos - name: brave.videos
engine: brave engine: brave
network: brave network: brave
shortcut: brvid shortcut: brvid
categories: [videos, web] categories: [ videos, web ]
brave_category: videos brave_category: videos
- name: brave.news - name: brave.news
@ -2197,7 +2198,7 @@ engines:
url_xpath: ./@href url_xpath: ./@href
title_xpath: ./div[@class="h"]/h4 title_xpath: ./div[@class="h"]/h4
content_xpath: ./div[@class="h"]/p content_xpath: ./div[@class="h"]/p
categories: [it, packages] categories: [ it, packages ]
disabled: true disabled: true
about: about:
website: https://lib.rs website: https://lib.rs
@ -2216,7 +2217,7 @@ engines:
title_xpath: ./h4/a[2] title_xpath: ./h4/a[2]
content_xpath: ./p content_xpath: ./p
first_page_num: 1 first_page_num: 1
categories: [it, repos] categories: [ it, repos ]
disabled: true disabled: true
about: about:
website: https://sr.ht website: https://sr.ht
@ -2235,7 +2236,7 @@ engines:
title_xpath: //div[@class="result"]/p[@class='title fsL1']/a title_xpath: //div[@class="result"]/p[@class='title fsL1']/a
content_xpath: //p[contains(@class,'url fsM')]/following-sibling::p content_xpath: //p[contains(@class,'url fsM')]/following-sibling::p
first_page_num: 0 first_page_num: 0
categories: [general, web] categories: [ general, web ]
disabled: true disabled: true
timeout: 4.0 timeout: 4.0
about: about:
@ -2258,7 +2259,7 @@ engines:
url_xpath: ./div[@class="SearchSnippet-headerContainer"]/h2/a/@href url_xpath: ./div[@class="SearchSnippet-headerContainer"]/h2/a/@href
title_xpath: ./div[@class="SearchSnippet-headerContainer"]/h2/a title_xpath: ./div[@class="SearchSnippet-headerContainer"]/h2/a
content_xpath: ./p[@class="SearchSnippet-synopsis"] content_xpath: ./p[@class="SearchSnippet-synopsis"]
categories: [packages, it] categories: [ packages, it ]
timeout: 3.0 timeout: 3.0
disabled: true disabled: true
about: about:


@@ -1,13 +1,13 @@
 import { BaseMessage } from '@langchain/core/messages';
 import {
-  PromptTemplate,
   ChatPromptTemplate,
   MessagesPlaceholder,
+  PromptTemplate,
 } from '@langchain/core/prompts';
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';


@@ -1,7 +1,7 @@
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { PromptTemplate } from '@langchain/core/prompts';
 import formatChatHistoryAsString from '../utils/formatHistory';


@@ -1,13 +1,13 @@
 import { BaseMessage } from '@langchain/core/messages';
 import {
-  PromptTemplate,
   ChatPromptTemplate,
   MessagesPlaceholder,
+  PromptTemplate,
 } from '@langchain/core/prompts';
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';


@@ -1,10 +1,10 @@
-import {RunnableMap, RunnableSequence} from '@langchain/core/runnables';
+import { RunnableMap, RunnableSequence } from '@langchain/core/runnables';
 import ListLineOutputParser from '../lib/outputParsers/listLineOutputParser';
-import {PromptTemplate} from '@langchain/core/prompts';
+import { PromptTemplate } from '@langchain/core/prompts';
 import formatChatHistoryAsString from '../utils/formatHistory';
-import {BaseMessage} from '@langchain/core/messages';
-import {BaseChatModel} from '@langchain/core/language_models/chat_models';
-import {ChatOpenAI} from '@langchain/openai';
+import { BaseMessage } from '@langchain/core/messages';
+import { BaseChatModel } from '@langchain/core/language_models/chat_models';
+import { ChatOpenAI } from '@langchain/openai';

 const suggestionGeneratorPrompt = `
 You are an AI suggestion generator for an AI powered search engine. You will be given a conversation below. You need to generate 4-5 suggestions based on the conversation. The suggestion should be relevant to the conversation that can be used by the user to ask the chat model for more information.
@@ -48,7 +48,9 @@ const generateSuggestions = (
   llm: ChatOpenAI,
 ) => {
   llm.temperature = 0;
-  const suggestionGeneratorChain = createSuggestionGeneratorChain(llm as unknown as BaseChatModel);
+  const suggestionGeneratorChain = createSuggestionGeneratorChain(
+    llm as unknown as BaseChatModel,
+  );
   return suggestionGeneratorChain.invoke(input);
 };


@@ -1,7 +1,7 @@
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { PromptTemplate } from '@langchain/core/prompts';
 import formatChatHistoryAsString from '../utils/formatHistory';


@@ -1,13 +1,13 @@
 import { BaseMessage } from '@langchain/core/messages';
 import {
-  PromptTemplate,
   ChatPromptTemplate,
   MessagesPlaceholder,
+  PromptTemplate,
 } from '@langchain/core/prompts';
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';


@@ -1,13 +1,13 @@
 import { BaseMessage } from '@langchain/core/messages';
 import {
-  PromptTemplate,
   ChatPromptTemplate,
   MessagesPlaceholder,
+  PromptTemplate,
 } from '@langchain/core/prompts';
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';


@@ -1,13 +1,13 @@
 import { BaseMessage } from '@langchain/core/messages';
 import {
-  PromptTemplate,
   ChatPromptTemplate,
   MessagesPlaceholder,
+  PromptTemplate,
 } from '@langchain/core/prompts';
 import {
-  RunnableSequence,
-  RunnableMap,
   RunnableLambda,
+  RunnableMap,
+  RunnableSequence,
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';


@@ -5,6 +5,7 @@ interface LineListOutputParserArgs {
 }

 class LineListOutputParser extends BaseOutputParser<string[]> {
+  lc_namespace = ['langchain', 'output_parsers', 'line_list_output_parser'];
   private key = 'questions';

   constructor(args?: LineListOutputParserArgs) {
@@ -16,8 +17,6 @@ class LineListOutputParser extends BaseOutputParser<string[]> {
     return 'LineListOutputParser';
   }

-  lc_namespace = ['langchain', 'output_parsers', 'line_list_output_parser'];
-
   async parse(text: string): Promise<string[]> {
     const regex = /^(\s*(-|\*|\d+\.\s|\d+\)\s|\u2022)\s*)+/;
     const startKeyIndex = text.indexOf(`<${this.key}>`);


@ -1,236 +1,241 @@
import {ChatOpenAI, OpenAIEmbeddings} from '@langchain/openai'; import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import {ChatOllama} from '@langchain/community/chat_models/ollama'; import { ChatOllama } from '@langchain/community/chat_models/ollama';
import {OllamaEmbeddings} from '@langchain/community/embeddings/ollama'; import { OllamaEmbeddings } from '@langchain/community/embeddings/ollama';
import {HuggingFaceTransformersEmbeddings} from './huggingfaceTransformer'; import { HuggingFaceTransformersEmbeddings } from './huggingfaceTransformer';
import { import {
getCustomEmbeddingModels, getCustomEmbeddingModels,
getCustomModels, getCustomModels,
getGroqApiKey, getGroqApiKey,
getOllamaApiEndpoint, getOllamaApiEndpoint,
getOpenaiApiKey, getOpenaiApiKey,
} from '../config'; } from '../config';
import logger from '../utils/logger'; import logger from '../utils/logger';
export const getAvailableChatModelProviders = async () => { export const getAvailableChatModelProviders = async () => {
const openAIApiKey = getOpenaiApiKey(); const openAIApiKey = getOpenaiApiKey();
const groqApiKey = getGroqApiKey(); const groqApiKey = getGroqApiKey();
const ollamaEndpoint = getOllamaApiEndpoint(); const ollamaEndpoint = getOllamaApiEndpoint();
const customModels = getCustomModels(); const customModels = getCustomModels();
const models = {}; const models = {};
if (openAIApiKey) { if (openAIApiKey) {
try { try {
models['openai'] = { models['openai'] = {
'GPT-3.5 turbo': new ChatOpenAI({ 'GPT-3.5 turbo': new ChatOpenAI({
openAIApiKey, openAIApiKey,
modelName: 'gpt-3.5-turbo', modelName: 'gpt-3.5-turbo',
temperature: 0.7, temperature: 0.7,
}), }),
'GPT-4': new ChatOpenAI({ 'GPT-4': new ChatOpenAI({
openAIApiKey, openAIApiKey,
modelName: 'gpt-4', modelName: 'gpt-4',
temperature: 0.7, temperature: 0.7,
}), }),
'GPT-4 turbo': new ChatOpenAI({ 'GPT-4 turbo': new ChatOpenAI({
openAIApiKey, openAIApiKey,
modelName: 'gpt-4-turbo', modelName: 'gpt-4-turbo',
temperature: 0.7, temperature: 0.7,
}), }),
'GPT-4 omni': new ChatOpenAI({ 'GPT-4 omni': new ChatOpenAI({
openAIApiKey, openAIApiKey,
modelName: 'gpt-4o', modelName: 'gpt-4o',
temperature: 0.7, temperature: 0.7,
}), }),
}; };
} catch (err) { } catch (err) {
logger.error(`Error loading OpenAI models: ${err}`); logger.error(`Error loading OpenAI models: ${err}`);
}
} }
}
if (groqApiKey) { if (groqApiKey) {
try { try {
models['groq'] = { models['groq'] = {
'LLaMA3 8b': new ChatOpenAI( 'LLaMA3 8b': new ChatOpenAI(
{ {
openAIApiKey: groqApiKey, openAIApiKey: groqApiKey,
modelName: 'llama3-8b-8192', modelName: 'llama3-8b-8192',
temperature: 0.7, temperature: 0.7,
}, },
{ {
baseURL: 'https://api.groq.com/openai/v1', baseURL: 'https://api.groq.com/openai/v1',
}, },
), ),
'LLaMA3 70b': new ChatOpenAI( 'LLaMA3 70b': new ChatOpenAI(
{ {
openAIApiKey: groqApiKey, openAIApiKey: groqApiKey,
modelName: 'llama3-70b-8192', modelName: 'llama3-70b-8192',
temperature: 0.7, temperature: 0.7,
}, },
{ {
baseURL: 'https://api.groq.com/openai/v1', baseURL: 'https://api.groq.com/openai/v1',
}, },
), ),
'Mixtral 8x7b': new ChatOpenAI( 'Mixtral 8x7b': new ChatOpenAI(
{ {
openAIApiKey: groqApiKey, openAIApiKey: groqApiKey,
modelName: 'mixtral-8x7b-32768', modelName: 'mixtral-8x7b-32768',
temperature: 0.7, temperature: 0.7,
}, },
{ {
baseURL: 'https://api.groq.com/openai/v1', baseURL: 'https://api.groq.com/openai/v1',
}, },
), ),
'Gemma 7b': new ChatOpenAI( 'Gemma 7b': new ChatOpenAI(
{ {
openAIApiKey: groqApiKey, openAIApiKey: groqApiKey,
modelName: 'gemma-7b-it', modelName: 'gemma-7b-it',
temperature: 0.7, temperature: 0.7,
}, },
{ {
baseURL: 'https://api.groq.com/openai/v1', baseURL: 'https://api.groq.com/openai/v1',
}, },
), ),
}; };
} catch (err) { } catch (err) {
logger.error(`Error loading Groq models: ${err}`); logger.error(`Error loading Groq models: ${err}`);
}
} }
}
if (ollamaEndpoint) { if (ollamaEndpoint) {
try { try {
const response = await fetch(`${ollamaEndpoint}/api/tags`, { const response = await fetch(`${ollamaEndpoint}/api/tags`, {
headers: { headers: {
'Content-Type': 'application/json', 'Content-Type': 'application/json',
}, },
}); });
const {models: ollamaModels} = (await response.json()) as any; const { models: ollamaModels } = (await response.json()) as any;
models['ollama'] = ollamaModels.reduce((acc, model) => { models['ollama'] = ollamaModels.reduce((acc, model) => {
acc[model.model] = new ChatOllama({ acc[model.model] = new ChatOllama({
baseUrl: ollamaEndpoint, baseUrl: ollamaEndpoint,
model: model.model, model: model.model,
temperature: 0.7, temperature: 0.7,
}); });
return acc; return acc;
}, {}); }, {});
} catch (err) { } catch (err) {
logger.error(`Error loading Ollama models: ${err}`); logger.error(`Error loading Ollama models: ${err}`);
}
} }
}
models['custom_openai'] = {}; models['custom_openai'] = {};
if (customModels && customModels.length > 0) { if (customModels && customModels.length > 0) {
models['custom'] = {}; models['custom'] = {};
try { try {
customModels.forEach((model) => { customModels.forEach((model) => {
if (model.provider === "openai") { if (model.provider === 'openai') {
models['custom'] = { models['custom'] = {
...models['custom'], ...models['custom'],
[model.name]: new ChatOpenAI({ [model.name]: new ChatOpenAI({
openAIApiKey: model.api_key, openAIApiKey: model.api_key,
modelName: model.name, modelName: model.name,
temperature: 0.7, temperature: 0.7,
configuration: { configuration: {
baseURL: model.base_url, baseURL: model.base_url,
} },
}) }),
} };
}
});
} catch (err) {
logger.error(`Error loading custom models: ${err}`);
} }
});
} catch (err) {
logger.error(`Error loading custom models: ${err}`);
} }
}
return models; return models;
}; };
export const getAvailableEmbeddingModelProviders = async () => { export const getAvailableEmbeddingModelProviders = async () => {
const openAIApiKey = getOpenaiApiKey(); const openAIApiKey = getOpenaiApiKey();
const ollamaEndpoint = getOllamaApiEndpoint(); const ollamaEndpoint = getOllamaApiEndpoint();
const customEmbeddingModels = getCustomEmbeddingModels(); const customEmbeddingModels = getCustomEmbeddingModels();
const models = {}; const models = {};
if (openAIApiKey) {
try {
models['openai'] = {
        'Text embedding 3 small': new OpenAIEmbeddings(
          {
            openAIApiKey,
            modelName: 'text-embedding-3-small',
          },
          { baseURL: 'http://10.0.1.2:5000/v1' },
        ),
        'Text embedding 3 large': new OpenAIEmbeddings({
          openAIApiKey,
          modelName: 'text-embedding-3-large',
        }),
      };
    } catch (err) {
      logger.error(`Error loading OpenAI embeddings: ${err}`);
    }
  }

  if (ollamaEndpoint) {
    try {
      const response = await fetch(`${ollamaEndpoint}/api/tags`, {
        headers: {
          'Content-Type': 'application/json',
        },
      });
      const { models: ollamaModels } = (await response.json()) as any;
      models['ollama'] = ollamaModels.reduce((acc, model) => {
        acc[model.model] = new OllamaEmbeddings({
          baseUrl: ollamaEndpoint,
          model: model.model,
        });
        return acc;
      }, {});
    } catch (err) {
      logger.error(`Error loading Ollama embeddings: ${err}`);
    }
  }

  if (customEmbeddingModels && customEmbeddingModels.length > 0) {
    models['custom'] = {};
    try {
      customEmbeddingModels.forEach((model) => {
        if (model.provider === 'openai') {
          models['custom'] = {
            ...models['custom'],
            [model.name]: new OpenAIEmbeddings(
              {
                openAIApiKey: model.api_key,
                modelName: model.model,
              },
              {
                baseURL: model.base_url,
              },
            ),
          };
        }
      });
    } catch (err) {
      logger.error(`Error loading custom models: ${err}`);
    }
  }

  try {
    models['local'] = {
      'BGE Small': new HuggingFaceTransformersEmbeddings({
        modelName: 'Xenova/bge-small-en-v1.5',
      }),
      'GTE Small': new HuggingFaceTransformersEmbeddings({
        modelName: 'Xenova/gte-small',
      }),
      'Bert Multilingual': new HuggingFaceTransformersEmbeddings({
        modelName: 'Xenova/bert-base-multilingual-uncased',
      }),
    };
  } catch (err) {
    logger.error(`Error loading local embeddings: ${err}`);
  }

  return models;
};
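The Ollama branch of the provider loader folds the `/api/tags` response into a record keyed by model name via `Array.prototype.reduce`. Below is a minimal standalone sketch of that pattern; plain config objects stand in for the `OllamaEmbeddings` instances, and the tag list and base URL are hypothetical:

```typescript
type TagModel = { model: string };

// Stand-in for the OllamaEmbeddings configuration each entry would carry.
type EmbeddingStub = { baseUrl: string; model: string };

// Fold a /api/tags-style model list into a record keyed by model name,
// mirroring the reduce used in the provider loader.
function buildOllamaMap(
  baseUrl: string,
  tags: TagModel[],
): Record<string, EmbeddingStub> {
  return tags.reduce<Record<string, EmbeddingStub>>((acc, m) => {
    acc[m.model] = { baseUrl, model: m.model };
    return acc;
  }, {});
}

// Hypothetical tag list for illustration.
const map = buildOllamaMap('http://localhost:11434', [
  { model: 'nomic-embed-text' },
  { model: 'all-minilm' },
]);
console.log(Object.keys(map)); // [ 'nomic-embed-text', 'all-minilm' ]
```

Typing the accumulator up front (`reduce<Record<string, EmbeddingStub>>`) avoids the implicit-`any` accumulator that the untyped `reduce((acc, model) => …, {})` in the real loader relies on.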

@@ -2,7 +2,7 @@ import express from 'express';
import handleImageSearch from '../agents/imageSearchAgent';
import { BaseChatModel } from '@langchain/core/language_models/chat_models';
import { getAvailableChatModelProviders } from '../lib/providers';
import { AIMessage, HumanMessage } from '@langchain/core/messages';
import logger from '../utils/logger';

const router = express.Router();

@@ -1,8 +1,8 @@
import express from 'express';
import generateSuggestions from '../agents/suggestionGeneratorAgent';
import { BaseChatModel } from '@langchain/core/language_models/chat_models';
import { getAvailableChatModelProviders } from '../lib/providers';
import { AIMessage, HumanMessage } from '@langchain/core/messages';
import logger from '../utils/logger';

const router = express.Router();
const router = express.Router(); const router = express.Router();

@@ -1,7 +1,7 @@
import express from 'express';
import { BaseChatModel } from '@langchain/core/language_models/chat_models';
import { getAvailableChatModelProviders } from '../lib/providers';
import { AIMessage, HumanMessage } from '@langchain/core/messages';
import logger from '../utils/logger';
import handleVideoSearch from '../agents/videoSearchAgent';

@@ -1,11 +1,14 @@
import { WebSocket } from 'ws';
import { handleMessage } from './messageHandler';
import {
  getAvailableChatModelProviders,
  getAvailableEmbeddingModelProviders,
} from '../lib/providers';
import { BaseChatModel } from '@langchain/core/language_models/chat_models';
import type { Embeddings } from '@langchain/core/embeddings';
import type { IncomingMessage } from 'http';
import logger from '../utils/logger';
import { ChatOpenAI } from '@langchain/openai';

export const handleConnection = async (
  ws: WebSocket,

@@ -1,5 +1,5 @@
import { EventEmitter, WebSocket } from 'ws';
import { AIMessage, BaseMessage, HumanMessage } from '@langchain/core/messages';
import handleWebSearch from '../agents/webSearchAgent';
import handleAcademicSearch from '../agents/academicSearchAgent';
import handleWritingAssistant from '../agents/writingAssistant';

@@ -1,6 +1,8 @@
{
  "compilerOptions": {
    "lib": [
      "ESNext"
    ],
    "module": "Node16",
    "moduleResolution": "Node16",
    "target": "ESNext",
@@ -13,6 +15,11 @@
    "skipLibCheck": true,
    "skipDefaultLibCheck": true
  },
  "include": [
    "src"
  ],
  "exclude": [
    "node_modules",
    "**/*.spec.ts"
  ]
}

@@ -3,11 +3,11 @@
@tailwind utilities;

@layer base {
  .overflow-hidden-scrollable {
    -ms-overflow-style: none;
  }

  .overflow-hidden-scrollable::-webkit-scrollbar {
    display: none;
  }
}

@@ -1,7 +1,7 @@
/* eslint-disable @next/next/no-img-element */
import {Dialog, Transition} from '@headlessui/react';
import {Document} from '@langchain/core/documents';
import {Fragment, useState} from 'react';

const MessageSources = ({ sources }: { sources: Document[] }) => {
  const [isDialogOpen, setIsDialogOpen] = useState(false);

@@ -1,9 +1,9 @@
/* eslint-disable @next/next/no-img-element */
import {ImagesIcon, PlusIcon} from 'lucide-react';
import {useState} from 'react';
import Lightbox from 'yet-another-react-lightbox';
import 'yet-another-react-lightbox/styles.css';
import {Message} from './ChatWindow';

type Image = {
  url: string;

@@ -1,9 +1,9 @@
/* eslint-disable @next/next/no-img-element */
import {PlayCircle, PlusIcon, VideoIcon} from 'lucide-react';
import {useState} from 'react';
import Lightbox, {GenericSlide, VideoSlide} from 'yet-another-react-lightbox';
import 'yet-another-react-lightbox/styles.css';
import {Message} from './ChatWindow';

type Video = {
  url: string;

@@ -1,12 +1,11 @@
'use client';

import { cn } from '@/lib/utils';
import { BookOpenText, Home, Search, Settings, SquarePen } from 'lucide-react';
import Link from 'next/link';
import { useSelectedLayoutSegments } from 'next/navigation';
import React, { useState } from 'react';
import Layout from './Layout';
import SettingsDialog from './SettingsDialog';

const Sidebar = ({ children }: { children: React.ReactNode }) => {

@@ -1,12 +1,12 @@
/** @type {import('next').NextConfig} */
const nextConfig = {
  images: {
    remotePatterns: [
      {
        hostname: 's2.googleusercontent.com',
      },
    ],
  },
};

export default nextConfig;

@@ -1 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 394 80">
<path fill="#000"
d="M262 0h68.5v12.7h-27.2v66.6h-13.6V12.7H262V0ZM149 0v12.7H94v20.4h44.3v12.6H94v21h55v12.6H80.5V0h68.7zm34.3 0h-17.8l63.8 79.4h17.9l-32-39.7 32-39.6h-17.9l-23 28.6-23-28.6zm18.3 56.7-9-11-27.1 33.7h17.8l18.3-22.7z"/>
<path fill="#000"
d="M81 79.3 17 0H0v79.3h13.6V17l50.2 62.3H81Zm252.6-.4c-1 0-1.8-.4-2.5-1s-1.1-1.6-1.1-2.6.3-1.8 1-2.5 1.6-1 2.6-1 1.8.3 2.5 1a3.4 3.4 0 0 1 .6 4.3 3.7 3.7 0 0 1-3 1.8zm23.2-33.5h6v23.3c0 2.1-.4 4-1.3 5.5a9.1 9.1 0 0 1-3.8 3.5c-1.6.8-3.5 1.3-5.7 1.3-2 0-3.7-.4-5.3-1s-2.8-1.8-3.7-3.2c-.9-1.3-1.4-3-1.4-5h6c.1.8.3 1.6.7 2.2s1 1.2 1.6 1.5c.7.4 1.5.5 2.4.5 1 0 1.8-.2 2.4-.6a4 4 0 0 0 1.6-1.8c.3-.8.5-1.8.5-3V45.5zm30.9 9.1a4.4 4.4 0 0 0-2-3.3 7.5 7.5 0 0 0-4.3-1.1c-1.3 0-2.4.2-3.3.5-.9.4-1.6 1-2 1.6a3.5 3.5 0 0 0-.3 4c.3.5.7.9 1.3 1.2l1.8 1 2 .5 3.2.8c1.3.3 2.5.7 3.7 1.2a13 13 0 0 1 3.2 1.8 8.1 8.1 0 0 1 3 6.5c0 2-.5 3.7-1.5 5.1a10 10 0 0 1-4.4 3.5c-1.8.8-4.1 1.2-6.8 1.2-2.6 0-4.9-.4-6.8-1.2-2-.8-3.4-2-4.5-3.5a10 10 0 0 1-1.7-5.6h6a5 5 0 0 0 3.5 4.6c1 .4 2.2.6 3.4.6 1.3 0 2.5-.2 3.5-.6 1-.4 1.8-1 2.4-1.7a4 4 0 0 0 .8-2.4c0-.9-.2-1.6-.7-2.2a11 11 0 0 0-2.1-1.4l-3.2-1-3.8-1c-2.8-.7-5-1.7-6.6-3.2a7.2 7.2 0 0 1-2.4-5.7 8 8 0 0 1 1.7-5 10 10 0 0 1 4.3-3.5c2-.8 4-1.2 6.4-1.2 2.3 0 4.4.4 6.2 1.2 1.8.8 3.2 2 4.3 3.4 1 1.4 1.5 3 1.5 5h-5.8z"/>
</svg>


@@ -1 +1,4 @@
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 283 64">
<path fill="black"
d="M141 16c-11 0-19 7-19 18s9 18 20 18c7 0 13-3 16-7l-7-5c-2 3-6 4-9 4-5 0-9-3-10-7h28v-3c0-11-8-18-19-18zm-9 15c1-4 4-7 9-7s8 3 9 7h-18zm117-15c-11 0-19 7-19 18s9 18 20 18c6 0 12-3 16-7l-8-5c-2 3-5 4-8 4-5 0-9-3-11-7h28l1-3c0-11-8-18-19-18zm-10 15c2-4 5-7 10-7s8 3 9 7h-19zm-39 3c0 6 4 10 10 10 4 0 7-2 9-5l8 5c-3 5-9 8-17 8-11 0-19-7-19-18s8-18 19-18c8 0 14 3 17 8l-8 5c-2-3-5-5-9-5-6 0-10 4-10 10zm83-29v46h-9V5h9zM37 0l37 64H0L37 0zm92 5-27 48L74 5h10l18 30 17-30h10zm59 12v10l-3-1c-6 0-10 4-10 10v15h-9V17h9v9c0-5 6-9 13-9z"/>
</svg>


@@ -1,6 +1,10 @@
{
  "compilerOptions": {
    "lib": [
      "dom",
      "dom.iterable",
      "esnext"
    ],
    "allowJs": true,
    "skipLibCheck": true,
    "strict": true,
@@ -18,9 +22,18 @@
      }
    ],
    "paths": {
      "@/*": [
        "./*"
      ]
    }
  },
  "include": [
    "next-env.d.ts",
    "**/*.ts",
    "**/*.tsx",
    ".next/types/**/*.ts"
  ],
  "exclude": [
    "node_modules"
  ]
}