ChatGPT can help us implement many features that used to be hard to build, adding AI support to traditional systems and improving the user experience. This article shows how to add ChatGPT-powered Q&A to an online Markdown documentation system, turning static documentation into smart documentation. Original article: Build a ChatGPT Powered Markdown Documentation in No Time

Build ChatGPT-Powered Smart Markdown Documentation in 10 Minutes

Today we will learn how to build a system that answers questions about your documentation through ChatGPT, built on top of OpenAI and Embedbase. The project is available at differentai.gumroad.com/l/chatgpt-d…

Overview

Here is what we are going to cover:

  1. Store your content in a database.
  2. Let the user type in a query.
  3. Search the database for the results most similar to the user's query (more on this later).
  4. Create a "context" from the top 5 results most similar to the query and ask ChatGPT:

Answer the question based on the context below, and if the question can't be answered based on the context, say "I don't know"

Context:
[CONTEXT]

Question:
[QUESTION]
Answer:

Implementation

Alright, let's get started.

Here are the prerequisites for building this system:

  • An Embedbase API key: a database that can find the "most similar results". Not every database is suited to this kind of job; we will use Embedbase, which can do exactly that. Embedbase lets us find the "semantic similarity" between a search query and the stored content.
  • An OpenAI API key: this is the ChatGPT part.
  • Nextra: along with Node.js installed.

Fill in your Embedbase and OpenAI API keys in .env:

OPENAI_API_KEY="<YOUR KEY>"
EMBEDBASE_API_KEY="<YOUR KEY>"
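
Note: Next.js loads .env automatically for its API routes, but a standalone Node script does not. You can pass the key inline when running the script (as shown later), or, as a convenience assumption not taken from the original article, load it with the dotenv package (npm i dotenv):

// At the top of a standalone script such as scripts/sync.js (assumes
// "dotenv" is installed; this is our addition, not part of the article).
require("dotenv").config();
if (!process.env.EMBEDBASE_API_KEY) {
 throw new Error("EMBEDBASE_API_KEY is missing from .env");
}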

As a reminder, we are going to build ChatGPT-powered QA documentation on top of Nextra, a great documentation framework that lets us write docs with Next.js, Tailwind CSS, and MDX (Markdown + React). We will also use Embedbase as the database and call OpenAI's ChatGPT.

Creating the Nextra documentation

You can find the official Nextra docs template on GitHub. After creating your documentation from the template, open it with any editor you like.

# we won't use "pnpm" here, rather the traditional "npm"
rm pnpm-lock.yaml
npm i
npm run dev

Now visit http://localhost:3000.

Try editing the .mdx files and see how the content changes.

Preparing and storing the files

The first step is to store the documentation in Embedbase. One thing to note: it works better if we store small, related chunks in the DB, so we will split the documents into smaller chunks. Let's start by writing a script named sync.js in the scripts folder.

You need the glob library to list files; install it with npm i glob@8.1.0 (we will use version 8.1.0).

const glob = require("glob");
const fs = require("fs");
const sync = async () => {
 // 1. read all files under pages/* with .mdx extension
 // for each file, read the content
 const documents = glob.sync("pages/**/*.mdx").map((path) => ({
  // we use as id /{pagename} which could be useful to
  // provide links in the UI
  id: path.replace("pages/", "/").replace("index.mdx", "").replace(".mdx", ""),
  // content of the file
  data: fs.readFileSync(path, "utf-8")
 }));
 // 2. here we split the documents in chunks, you can do it in many different ways, pick the one you prefer
 // split documents into chunks of 100 lines
 const chunks = [];
 documents.forEach((document) => {
  const lines = document.data.split("\n");
  const chunkSize = 100;
  for (let i = 0; i < lines.length; i += chunkSize) {
   const chunk = lines.slice(i, i + chunkSize).join("\n");
    chunks.push({
     data: chunk
    });
  }
 });
}
sync();
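
The script above splits by a fixed line count; the chunking strategy is up to you. If you prefer something closer to sentence-level grouping, here is a rough sketch of an alternative splitter (hypothetical, not from the original article):

// Alternative chunking: split on sentence boundaries up to a character budget.
// Rough sketch; a production version would handle abbreviations, code blocks, etc.
const sentenceChunks = (text, maxChars = 500) => {
 const sentences = text.split(/(?<=[.!?])\s+/);
 const result = [];
 let current = "";
 for (const sentence of sentences) {
  if (current && (current + sentence).length > maxChars) {
   result.push(current.trim());
   current = "";
  }
  current += sentence + " ";
 }
 if (current.trim()) result.push(current.trim());
 return result;
};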

Now that we have built the chunks to store in the DB, let's extend the script to add them to Embedbase.

To query Embedbase, run npm i node-fetch@2.6.9 to install version 2.6.9 of node-fetch.

const fetch = require("node-fetch");
// your Embedbase api key
const apiKey = process.env.EMBEDBASE_API_KEY;
const sync = async () => {
 // ...
 // 3. we then insert the data in Embedbase
 const response = await fetch("https://embedbase-hosted-usx5gpslaq-uc.a.run.app/v1/documentation", { // "documentation" is your dataset ID
  method: "POST",
  headers: {
   "Authorization": "Bearer " + apiKey,
   "Content-Type": "application/json"
  },
  body: JSON.stringify({
   documents: chunks
  })
 });
 const data = await response.json();
 console.log(data);
}
sync();

Great, you can run it now:

EMBEDBASE_API_KEY="<YOUR API KEY>" node scripts/sync.js

If all goes well, you should see:

[Screenshot: output of the sync script]

Getting the user's query

Next, let's modify the Nextra documentation theme to replace the built-in search bar with a ChatGPT-powered one.

Add a Modal component to theme.config.tsx with the following content:

// update the imports
import { DocsThemeConfig, useTheme } from 'nextra-theme-docs'
const Modal = ({ children, open, onClose }) => {
	const theme = useTheme();
	if (!open) return null;
	return (
		<div
		  style={{
		    position: 'fixed',
		    top: 0,
		    left: 0,
		    right: 0,
		    bottom: 0,
		    backgroundColor: 'rgba(0,0,0,0.5)',
		    zIndex: 100,
		   }}
		  onClick={onClose}>			
		  <div
		    style={{
		      position: 'absolute',
		      top: '50%',
		      left: '50%',
		      transform: 'translate(-50%, -50%)',
		      backgroundColor: theme.resolvedTheme === 'dark' ? '#1a1a1a' : 'white',
		      padding: 20,
		      borderRadius: 5,
		      width: '80%',
		      maxWidth: 700,
		      maxHeight: '80%',
		      overflow: 'auto',
		    }}
		    onClick={(e) => e.stopPropagation()}>			
		      {children}
		  </div>
	      </div>
	);
};

Now create the search bar:

// update the imports
import React, { useState } from 'react'
// we create a Search component
const Search = () => {
  const [open, setOpen] = useState(false);
  const [question, setQuestion] = useState("");
  // ...
  // All the logic that we will see later
  const answerQuestion = () => {  }
  // ...
  return (
    <>
      <input
        placeholder="Ask a question"
        // We open the modal here
        // to let the user ask a question
        onClick={() => setOpen(true)}
        type="text"
      />
      <Modal open={open} onClose={() => setOpen(false)}>
        <form onSubmit={answerQuestion} className="nx-flex nx-gap-3">
          <input
            placeholder="Ask a question"
            type="text"
            value={question}
            onChange={(e) => setQuestion(e.target.value)}
          />
          <button type="submit">
            Ask
          </button>
        </form>
      </Modal>
    </>
  );
}

Finally, update the config to use the newly created search bar:

const config: DocsThemeConfig = {
	logo: <span>My Project</span>,
	project: {
		link: 'https://github.com/shuding/nextra-docs-template',
	},
	chat: {
		link: 'https://discord.com',
	},
	docsRepositoryBase: 'https://github.com/shuding/nextra-docs-template',
	footer: {
		text: 'Nextra Docs Template',
	},
	// add this to use our Search component
	search: {
		component: <Search />
	}
}

Building the context

Here we need OpenAI's token counting library, tiktoken; run npm i @dqbd/tiktoken to install it.
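
To get a quick feel for what tiktoken does, here is a minimal sketch (our own example, not from the original article) using the same cl100k_base encoding as the endpoint below:

import { get_encoding } from "@dqbd/tiktoken";

const enc = get_encoding("cl100k_base");
// encode() turns a string into token ids; the array length is the token count
console.log(enc.encode("Hello, documentation!").length);
// tiktoken is WASM-backed, so free the encoder when you are done with it
enc.free();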

Next, create the ChatGPT prompt enriched with the context. Create the file pages/api/buildPrompt.ts with the following code:

// pages/api/buildPrompt.ts
import { get_encoding } from "@dqbd/tiktoken";
// Load the tokenizer which is designed to work with the embedding model
const enc = get_encoding('cl100k_base');
const apiKey = process.env.EMBEDBASE_API_KEY;
// this is how you search Embedbase with a string query
const search = async (query: string) => {
 return fetch("https://embedbase-hosted-usx5gpslaq-uc.a.run.app/v1/documentation/search", {
  method: "POST",
  headers: {
   Authorization: "Bearer " + apiKey,
   "Content-Type": "application/json"
  },
  body: JSON.stringify({
   query: query
  })
 }).then(response => response.json());
};
const createContext = async (question: string, maxLen = 1800) => {
 // get the similar data to our query from the database
 const searchResponse = await search(question);
 let curLen = 0;
 const returns = [];
 // We want to add context to some limit of length (tokens)
 // because usually LLM have limited input size
 for (const similarity of searchResponse["similarities"]) {
  const sentence = similarity["data"];
  // count the tokens
  const nTokens = enc.encode(sentence).length;
  // a token is roughly 4 characters, to learn more
  // https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
  curLen += nTokens + 4;
  if (curLen > maxLen) {
   break;
  }
  returns.push(sentence);
 }
 // we join the entries we found with a separator to show it's different
 return returns.join("\n\n###\n\n");
}
// this is the endpoint that returns an answer to the client
export default async function buildPrompt(req, res) {
 const prompt = req.body.prompt;
 const context = await createContext(prompt);
 const newPrompt = `Answer the question based on the context below, and if the question can't be answered based on the context, say "I don't know"\n\nContext: ${context}\n\n---\n\nQuestion: ${prompt}\nAnswer:`;
 res.status(200).json({ prompt: newPrompt });
}
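
With the dev server running, you can sanity-check this endpoint with a quick fetch call (a hypothetical test of ours, not from the original article):

// Run in the browser console or a Node >= 18 ES module while `npm run dev` is up.
const res = await fetch("http://localhost:3000/api/buildPrompt", {
 method: "POST",
 headers: { "Content-Type": "application/json" },
 body: JSON.stringify({ prompt: "How do I get started?" }),
});
// Logs the contextualized prompt that will be sent to ChatGPT.
console.log((await res.json()).prompt);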

Calling ChatGPT

First, add some functions for making streaming calls to OpenAI in the file utils/OpenAIStream.ts; run npm i eventsource-parser to install eventsource-parser.

import {
  createParser,
  ParsedEvent,
  ReconnectInterval,
} from "eventsource-parser";
export interface OpenAIStreamPayload {
 model: string;
 // this is a list of messages to give ChatGPT
 messages: { role: "user"; content: string }[];
 stream: boolean;
}
export async function OpenAIStream(payload: OpenAIStreamPayload) {
 const encoder = new TextEncoder();
 const decoder = new TextDecoder();
 let counter = 0;
 const res = await fetch("https://api.openai.com/v1/chat/completions", {
  headers: {
   "Content-Type": "application/json",
   "Authorization": `Bearer ${process.env.OPENAI_API_KEY ?? ""}`,
  },
  method: "POST",
  body: JSON.stringify(payload),
 });
 const stream = new ReadableStream({
  async start(controller) {
   // callback
   function onParse(event: ParsedEvent | ReconnectInterval) {
    if (event.type === "event") {
     const data = event.data;
     // https://beta.openai.com/docs/api-reference/completions/create#completions/create-stream
     if (data === "[DONE]") {
      controller.close();
      return;
     }
     try {
      const json = JSON.parse(data);
      // get the text response from ChatGPT
      const text = json.choices[0]?.delta?.content;
      if (!text) return;
      if (counter < 2 && (text.match(/\n/) || []).length) {
       // this is a prefix character (i.e., "\n\n"), do nothing
       return;
      }
      const queue = encoder.encode(text);
      controller.enqueue(queue);
      counter++;
     } catch (e) {
      // maybe parse error
      controller.error(e);
     }
    }
   }
   // stream response (SSE) from OpenAI may be fragmented into multiple chunks
   // this ensures we properly read chunks and invoke an event for each SSE event stream
   const parser = createParser(onParse);
   // https://web.dev/streams/#asynchronous-iteration
   for await (const chunk of res.body as any) {
    parser.feed(decoder.decode(chunk));
   }
  },
 });
 return stream;
}

Then create the file pages/api/qa.ts as the endpoint that makes the streaming call to ChatGPT.

// pages/api/qa.ts
import { OpenAIStream, OpenAIStreamPayload } from "../../utils/OpenAIStream";
export const config = {
  // We are using Vercel edge function for this endpoint
  runtime: "edge",
};
interface RequestPayload {
 prompt: string;
}
const handler = async (req: Request, res: Response): Promise<Response> => {
 const { prompt } = (await req.json()) as RequestPayload;
 if (!prompt) {
  return new Response("No prompt in the request", { status: 400 });
 }
 const payload: OpenAIStreamPayload = {
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: prompt }],
  stream: true,
 };
 const stream = await OpenAIStream(payload);
 return new Response(stream);
};
export default handler;

Connecting everything and asking the question

Now it is time to ask the question through API calls. Edit theme.config.tsx and add this function to the Search component:

// theme.config.tsx
const Search = () => {
	const [open, setOpen] = useState(false);
	const [question, setQuestion] = useState("");
	const [answer, setAnswer] = useState("");
	const answerQuestion = async (e: any) => {
		e.preventDefault();
		setAnswer("");
		// build the contextualized prompt
		const promptResponse = await fetch("/api/buildPrompt", {
			method: "POST",
			headers: {
				"Content-Type": "application/json",
			},
			body: JSON.stringify({
				prompt: question,
			}),
		});
		const promptData = await promptResponse.json();
		// send it to ChatGPT
		const response = await fetch("/api/qa", {
			method: "POST",
			headers: {
				"Content-Type": "application/json",
			},
			body: JSON.stringify({
				prompt: promptData.prompt,
			}),
		});
		if (!response.ok) {
			throw new Error(response.statusText);
		}
		const data = response.body;
		if (!data) {
			return;
		}
		const reader = data.getReader();
		const decoder = new TextDecoder();
		let done = false;
		// read the streaming ChatGPT answer
		while (!done) {
			const { value, done: doneReading } = await reader.read();
			done = doneReading;
			const chunkValue = decoder.decode(value);
			// update our interface with the answer
			setAnswer((prev) => prev + chunkValue);
		}
	};
	return (
		<>
			<input
				placeholder="Ask a question"
				onClick={() => setOpen(true)}
				type="text"
			/>
			<Modal open={open} onClose={() => setOpen(false)}>
				<form onSubmit={answerQuestion} className="nx-flex nx-gap-3">
					<input
						placeholder="Ask a question"
						type="text"
						value={question}
						onChange={(e) => setQuestion(e.target.value)}
					/>
					<button type="submit">
						Ask
					</button>
				</form>
				<p>
					{answer}
				</p>
			</Modal>
		</>
	);
}

You should now see:

[Screenshot: the ChatGPT-powered search modal in action]

Of course, feel free to improve the styling.

Conclusion

To sum up, we have:

  • Created a Nextra documentation site
  • Prepared and stored the documents in Embedbase
  • Built an interface that captures the user's query
  • Searched the database for the question's context to give to ChatGPT
  • Built a prompt with this context and called ChatGPT
  • Let the user ask questions by connecting it all together

Thanks for reading. There is an open-source template for creating this kind of documentation on GitHub.

Further reading

Embedding is a machine learning concept that lets us capture the semantics of data numerically, enabling features such as:

  • Semantic search (e.g., what is the similarity between "the cow eats grass" and "the monkey eats bananas"; this also works for comparing images, etc.)
  • Recommendation systems (if you like the movie Avatar, you might also like Star Wars)
  • Classification ("this movie is awesome" is a positive sentence, "this movie sucks" is a negative one)
  • Generative search (a chatbot that answers questions about a PDF, a website, a YouTube video, etc.)

Embedding is not a new technology, but it has recently become more popular, more general, and easier to use thanks to OpenAI's fast and cheap embedding endpoint. There is plenty of information about embeddings online, so we won't dive into the technical details here.

You can think of AI embeddings as Harry Potter's Sorting Hat. Just as the Sorting Hat assigns houses based on students' traits, AI embeddings group similar content based on its features. When we want to find similar content, we ask the AI to give us embeddings of the content and compute the distance between them. The closer the distance between two embeddings, the more similar the content, much like the Sorting Hat using each student's traits to determine the best-fitting house. With AI embeddings we can compare content by its features quickly and easily, leading to better decisions and more relevant search results.
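
To make "computing the distance" concrete, here is a minimal sketch of ours (the embedding model name is an assumption, not something the article specifies) that embeds two strings with OpenAI and compares them with cosine similarity:

// Requires Node >= 18 for global fetch and an OPENAI_API_KEY in the environment.
// "text-embedding-ada-002" is an assumption; swap in whatever model you use.
const embed = async (input) => {
 const res = await fetch("https://api.openai.com/v1/embeddings", {
  method: "POST",
  headers: {
   "Content-Type": "application/json",
   Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({ model: "text-embedding-ada-002", input }),
 });
 return (await res.json()).data[0].embedding;
};

// Cosine similarity: the closer to 1, the more similar the content.
const cosine = (a, b) => {
 let dot = 0, normA = 0, normB = 0;
 for (let i = 0; i < a.length; i++) {
  dot += a[i] * b[i];
  normA += a[i] * a[i];
  normB += b[i] * b[i];
 }
 return dot / (Math.sqrt(normA) * Math.sqrt(normB));
};

const main = async () => {
 const a = await embed("The cow eats grass");
 const b = await embed("The monkey eats bananas");
 console.log(cosine(a, b)); // higher score = more semantically similar
};
main();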

[Figure: illustration of word embeddings]

The approach depicted above simply embeds words, but today it is possible to embed sentences, images, sentences + images, and many other things.

If you want to use embeddings in production, there are some pitfalls to watch out for:

  • Infrastructure for storing embeddings at scale
  • Cost optimization (e.g., avoiding computing the same data twice; see the sketch below)
  • Isolation of user embeddings (you don't want your search feature to expose other users' data)
  • Dealing with the model's input size limit
  • Integration with popular application infrastructure (Supabase, Firebase, Google Cloud, etc.)
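
As a concrete illustration of the cost-optimization point above, one common trick (our suggestion, not something the article prescribes) is to hash each chunk and skip the ones you have already indexed:

const crypto = require("crypto");

// Hash a chunk so previously indexed content can be skipped on later runs.
const sha256 = (text) => crypto.createHash("sha256").update(text).digest("hex");

// In practice you would persist this set (file, DB, ...) between runs.
const alreadyIndexed = new Set();

const dedupe = (chunks) =>
 chunks.filter(({ data }) => {
  const hash = sha256(data);
  if (alreadyIndexed.has(hash)) return false; // already embedded, skip the cost
  alreadyIndexed.add(hash);
  return true;
 });

console.log(dedupe([{ data: "hello" }, { data: "hello" }]).length); // 1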

Continuously preparing the data with a GitHub Action

The point of embeddings is being able to index any kind of unstructured data, and we want the documentation to be indexed every time it changes, right? Below is a GitHub Action that indexes every markdown file on each git push to the main branch:

# .github/workflows/index.yaml
name: Index documentation
on:
  push:
    branches:
      - main
jobs:
  index:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: 14
      - run: npm install
      - run: node scripts/sync.js
        env:
          EMBEDBASE_API_KEY: ${{ secrets.EMBEDBASE_API_KEY }}

Don't forget to add EMBEDBASE_API_KEY to your GitHub secrets.

