DPS Build
AI, coding, data science and startups
A quick record of my recent experience hooking Carrd up to a custom API.

Carrd is a website builder created single-handedly by AJ; launched in 2016, it reached $1M in ARR within two years.

Carrd is very easy to pick up. This time we tried the custom form feature and hit a few snags along the way. After opening a ticket, AJ kept following up, usually replying within an hour on weekdays and within 24 hours on weekends. His responsiveness is really impressive!

https://letters.acacess.com/carrd-with-apis/
I've spent the last few days learning the langchain toolchain. It feels a bit like pandas built for LLMs, connecting all sorts of upstream and downstream tools.

I hear they just raised a round and are about to start productizing. Remarkably fast, and all of it happened within just a few months.
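
As a small taste of what "connecting tools" looks like, here is a minimal sketch of a prompt-plus-LLM chain. It assumes the early-2023 langchain API (the module paths have since been reorganized) and an OpenAI API key in the environment; the prompt text is made up for illustration.

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A reusable prompt with one input variable.
prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest one catchy name for a company that makes {product}.",
)

# The chain fills in the template, sends it to the model, and returns the text.
chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=prompt)
print(chain.run("colorful socks"))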

https://github.com/hwchase17/langchain

#open #ml
This thread is a lot of fun: parody slogans for major websites, such as Webflow's "may you succeed in guessing our pricing."

https://twitter.com/jovvvian/status/1643663422863863809
The Bitcoin whitepaper is hidden inside macOS; type the following command in Terminal to open it:

open /System/Library/Image\ Capture/Devices/VirtualScanner.app/Contents/Resources/simpledoc.pdf

https://waxy.org/2023/04/the-bitcoin-whitepaper-is-hidden-in-every-modern-copy-of-macos/
Online services often exhibit data locality, with users frequently accessing popular or trending content. Cache systems take advantage of this behavior by storing commonly accessed data, which in turn reduces data retrieval time, improves response times, and eases the burden on backend servers. Traditional cache systems typically utilize an exact match between a new query and a cached query to determine if the requested content is available in the cache before fetching the data.

However, an exact-match approach is less effective for LLM caches due to the complexity and variability of LLM queries, resulting in a low cache hit rate. To address this issue, GPTCache adopts alternative strategies like semantic caching. Semantic caching identifies and stores similar or related queries, thereby increasing cache hit probability and enhancing overall caching efficiency.

GPTCache employs embedding algorithms to convert queries into embeddings and uses a vector store for similarity search on these embeddings. This process allows GPTCache to identify and retrieve similar or related queries from the cache storage, as illustrated in the Modules section.
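
As a rough sketch of that idea (not GPTCache's actual code), the toy class below treats a new query as a cache hit when its embedding is close enough to a stored one. embed() is a stand-in for any embedding model, and the linear scan stands in for a real vector store.

import numpy as np

def cosine(a, b):
    # Cosine similarity between two 1-D vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class SemanticCache:
    def __init__(self, embed, threshold=0.9):
        self.embed = embed          # any function mapping str -> np.ndarray
        self.threshold = threshold  # minimum similarity that counts as a hit
        self.entries = []           # (embedding, cached answer) pairs

    def get(self, query):
        q = self.embed(query)
        # An exact-match cache would compare strings; here we compare embeddings.
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best is not None and cosine(q, best[0]) >= self.threshold:
            return best[1]
        return None  # cache miss: call the LLM, then put() the answer

    def put(self, query, answer):
        self.entries.append((self.embed(query), answer))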

https://github.com/zilliztech/gptcache
A few days ago a friend and I were talking about Microsoft's Visual ChatGPT. I took a look and, wow, it strings 22 models together and needs at least four GPUs to run.

https://github.com/microsoft/visual-chatgpt#gpu-memory-usage
A one-click way to install gpt4all has arrived

https://gpt4all.io/index.html
OpenAI's Lilian Weng wrote a dedicated introduction to prompt engineering: https://t.me/tms_ur_way/2655
Chip Huyen wrote a very detailed summary of where productionizing LLMs stands today:

1. Prompt engineering greatly lowers the difficulty of development, but it also raises the cost of maintenance;
2. Although calling the various LLM APIs directly can run up astronomical bills, it is still cheaper than training your own model from scratch;
3. LLM output can be unstable, so this risk has to be reduced in various ways, for example by adding more examples to the prompt (see the sketch after this list);
4. LLM latency may also be a potential problem;
5. In short, the field is moving so fast that a large share of these lessons may no longer be worth consulting three months from now.
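
As a hedged illustration of point 3, here is a minimal few-shot prompt builder. The sentiment task, EXAMPLES, and build_prompt() are all made up for illustration; the resulting string would be sent to whatever completion API you use.

EXAMPLES = [
    ("The movie was fantastic!", "positive"),
    ("Worst purchase I ever made.", "negative"),
]

def build_prompt(text):
    # Each example pins down the exact output format we expect,
    # which narrows the range of answers the model gives back.
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in EXAMPLES)
    return f"{shots}\nReview: {text}\nSentiment:"

print(build_prompt("Shipping was slow but the product works."))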

https://readwise.io/reader/shared/01gxwbcx0r0bf7tnbcn1nh1nw4