Forwarded from Open Source & AI Future 🇺🇦
Code App
Bringing desktop-like code editing to the iPad. The app is also available on the App Store and via TestFlight.
#Swift
Forwarded from Технологический Болт Генона
CLI tool for Kubernetes that provides pretty monitoring for Nodes, Pods, Containers, and PVCs resources on the terminal through Prometheus metrics
https://github.com/eslam-gomaa/kptop
Forwarded from alsenna canton
GitHub - rbaron/catprinter: 🐱🖨
Forwarded from HN Best Comments
Re: Animated Drawings
Hey! That’s my project!
Code and dataset are here: https://github.com/facebookresearch/AnimatedDrawings
And a browser-based version of it is here: http://sketch.metademolab.com/
hjessmith, 18 hours ago
GitHub - facebookresearch/AnimatedDrawings: Code to accompany "A Method for Animating Children's Drawings of the Human Figure"
Forwarded from Технологический Болт Генона
- Easier wayback to Internet Archive, archive.today, IPFS and Telegraph integration
- Interactive with IRC, Matrix, Telegram bot, Discord bot, Mastodon, and Twitter as a daemon service for convenient use
- Supports publishing wayback results to Telegram channel, Mastodon, and GitHub Issues for sharing
. . .
wayback --ia --is -d telegram -t your-telegram-bot-token
What a treat.
A self-hosted archiving service integrated with Internet Archive, archive.today, IPFS and beyond.
https://github.com/wabarc/wayback
Forwarded from HN Best Comments
Re: MiniGPT-4
On a technical level, they're doing something really simple -- take BLIP2's ViT-L+Q-former, connect it to Vicuna-13B with a linear layer, and train just the tiny layer on some datasets of image-text pairs.
But the results are pretty amazing. It completely knocks OpenFlamingo and even the original BLIP-2 models out of the park. And best of all, it arrived before OpenAI's GPT-4 image modality did. A real win for open-source AI.
The repo's default inference code is kind of bad -- vicuna is loaded in fp16 so it can't fit on any consumer hardware. I created a PR on the repo to load it with int8, so hopefully by tomorrow it'll be runnable by 3090/4090 users.
I also developed a toy discord bot (https://github.com/152334H/MiniGPT-4-discord-bot) to show the model to some people, but inference is very slow so I doubt I'll be hosting it publicly.
152334H, 6 hours ago
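The architecture the comment describes can be sketched as a single trainable linear layer that maps the frozen Q-former's output tokens into the LLM's embedding space. A minimal sketch in PyTorch; the dimensions and class name here are illustrative assumptions, not taken from the MiniGPT-4 repo:

```python
import torch
import torch.nn as nn

# Assumed widths: BLIP-2's Q-former emits 768-dim tokens; a 13B
# LLaMA-family model such as Vicuna-13B has a 5120-dim hidden state.
QFORMER_DIM = 768
LLM_DIM = 5120

class VisionToLLMProjector(nn.Module):
    """Maps frozen Q-former output tokens into the LLM embedding space.

    In the MiniGPT-4-style setup described above, this linear layer holds
    the only parameters that are trained; the vision encoder, Q-former,
    and LLM all stay frozen.
    """

    def __init__(self, in_dim: int = QFORMER_DIM, out_dim: int = LLM_DIM):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, qformer_tokens: torch.Tensor) -> torch.Tensor:
        # qformer_tokens: (batch, num_query_tokens, in_dim)
        return self.proj(qformer_tokens)

proj = VisionToLLMProjector()
fake_tokens = torch.randn(2, 32, QFORMER_DIM)  # stand-in for Q-former output
llm_inputs = proj(fake_tokens)                 # (2, 32, LLM_DIM)
print(llm_inputs.shape)
```

The projected tokens would then be prepended to the text embeddings fed into the LLM, which is why only this tiny layer needs gradient updates.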
GitHub - 152334H/MiniGPT-4-discord-bot: A true multimodal LLaMA derivative -- on Discord!
Forwarded from HN Best Comments
Re: RedPajama: Reproduction of LLaMA with friendly lic...
I'm very glad people are starting to push back against claims of various LLMs being open source. I was beginning to be worried that the term would be forcefully redefined in the ML space to mean "weights available." With the kickoff of projects like this and Databricks' Dolly, I'm heartened to see the community saying "no, we are willing to spend the compute to make actually open models."
(While it's true that the actual model code of Llama is properly open source, it's also useless for inference by itself. Claiming these models are open source seems like having your cake and eating it too - you get accolades for "open sourcing" but still get to control what happens with it.)
thrtythreeforty, 14 hours ago
Forwarded from gonzo-обзоры ML статей
Stability AI just released initial set of StableLM-alpha models, with 3B and 7B parameters. 15B and 30B models are on the way.
Base models are released under CC BY-SA-4.0.
StableLM-Alpha models are trained on a new dataset built on The Pile; it contains 1.5 trillion tokens, roughly 3x the size of The Pile. These models will be trained on up to 1.5 trillion tokens. The context length for these models is 4096 tokens.
As a proof-of-concept, we also fine-tuned the model with Stanford Alpaca's procedure using a combination of five recent datasets for conversational agents: Stanford's Alpaca, Nomic-AI's gpt4all, RyokoAI's ShareGPT52K datasets, Databricks labs' Dolly, and Anthropic's HH. We will be releasing these models as StableLM-Tuned-Alpha.
https://github.com/Stability-AI/StableLM
GitHub - Stability-AI/StableLM: StableLM: Stability AI Language Models
Forwarded from Записки админа
💻 Slides - a neat utility for presentations. Prepare a specially formatted markdown file and you get a presentation, built from that file, right in the terminal...
https://github.com/maaslalani/slides
Also, the presentation can be hosted and made available over SSH. No idea why you'd need that, but still...
https://github.com/maaslalani/slides#ssh
#slides #cli #tui
Forwarded from Записки админа
🧾 A great-looking TUI translator that supports several translation backends - Google, Apertium, Argos, Bing, ChatGPT, Reverso.
https://github.com/eeeXun/gtt
#tui #translate #будничное
Forwarded from Записки админа
🔧 And here's another interesting option for creating backups with rsync: https://github.com/laurent22/rsync-time-backup
#фидбечат #backup #rsync