Forwarded from HN Best Comments
Re: Animated Drawings
Hey! That’s my project!
Code and dataset are here: https://github.com/facebookresearch/AnimatedDrawings
And a browser-based version of it is here: http://sketch.metademolab.com/
hjessmith, 18 hours ago
Forwarded from Технологический Болт Генона
- Easy wayback to the Internet Archive, archive.today, IPFS, and Telegraph
- Interactive use via IRC, Matrix, Telegram bot, Discord bot, Mastodon, and Twitter, running as a daemon service
- Supports publishing wayback results to a Telegram channel, Mastodon, and GitHub Issues for sharing
. . .
wayback --ia --is -d telegram -t your-telegram-bot-token
What a delight.
A self-hosted archiving service integrated with Internet Archive, archive.today, IPFS and beyond.
https://github.com/wabarc/wayback
Forwarded from HN Best Comments
Re: MiniGPT-4
On a technical level, they're doing something really simple -- take BLIP-2's ViT-L + Q-Former, connect it to Vicuna-13B with a linear layer, and train just that tiny layer on some datasets of image-text pairs.
But the results are pretty amazing. It completely knocks OpenFlamingo and even the original BLIP-2 models out of the park. And best of all, it arrived before OpenAI's GPT-4 image modality did. A real win for open-source AI.
The repo's default inference code is kind of bad -- Vicuna is loaded in fp16, so it can't fit on any consumer hardware. I created a PR on the repo to load it with int8, so hopefully by tomorrow it'll be runnable by 3090/4090 users.
I also developed a toy Discord bot (https://github.com/152334H/MiniGPT-4-discord-bot) to show the model to some people, but inference is very slow so I doubt I'll be hosting it publicly.
152334H, 6 hours ago
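To make the "connect it with a linear layer" part concrete, here's a minimal PyTorch sketch of the idea. The dimensions and class name are illustrative assumptions, not MiniGPT-4's actual code: a frozen vision encoder + Q-Former emits a handful of query tokens, a single trainable linear projection maps them into the LLM's embedding space, and everything else stays frozen.

```python
import torch
import torch.nn as nn

# Illustrative sizes only: a BLIP-2-style Q-Former emits 32 query tokens of
# width 768; Vicuna-13B's hidden size is 5120.
NUM_QUERY_TOKENS, QFORMER_DIM, LLM_DIM = 32, 768, 5120

class VisionToLLMBridge(nn.Module):
    """The only trainable piece: a linear map from Q-Former space to LLM space."""
    def __init__(self) -> None:
        super().__init__()
        self.proj = nn.Linear(QFORMER_DIM, LLM_DIM)

    def forward(self, qformer_tokens: torch.Tensor) -> torch.Tensor:
        # (batch, 32, 768) -> (batch, 32, 5120); the result is concatenated with
        # the text token embeddings before being fed to the frozen language model.
        return self.proj(qformer_tokens)

bridge = VisionToLLMBridge()
fake_image_tokens = torch.randn(1, NUM_QUERY_TOKENS, QFORMER_DIM)  # stand-in for Q-Former output
soft_prompt = bridge(fake_image_tokens)
print(soft_prompt.shape)  # torch.Size([1, 32, 5120])
```

As for the int8 route mentioned in the comment: that typically goes through bitsandbytes (e.g. passing load_in_8bit=True to transformers' from_pretrained), which roughly halves the weight memory compared to fp16.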
Forwarded from HN Best Comments
Re: RedPajama: Reproduction of LLaMA with friendly lic...
I'm very glad people are starting to push back against claims of various LLMs being open source. I was beginning to be worried that the term would be forcefully redefined in the ML space to mean "weights available." With the kickoff of projects like this and Databricks' Dolly, I'm heartened to see the community saying "no, we are willing to spend the compute to make actually open models."
(While it's true that the actual model code of Llama is properly open source, it's also useless for inference by itself. Claiming these models are open source seems like having your cake and eating it too - you get accolades for "open sourcing" but still get to control what happens with it.)
thrtythreeforty, 14 hours ago
Forwarded from gonzo-обзоры ML статей
Stability AI just released an initial set of StableLM-Alpha models, with 3B and 7B parameters. 15B and 30B models are on the way.
Base models are released under CC BY-SA-4.0.
StableLM-Alpha models are trained on a new dataset that builds on The Pile and contains 1.5 trillion tokens, roughly 3x the size of The Pile. These models will be trained on up to 1.5 trillion tokens. The context length for these models is 4096 tokens.
As a proof-of-concept, we also fine-tuned the model with Stanford Alpaca's procedure using a combination of five recent datasets for conversational agents: Stanford's Alpaca, Nomic-AI's gpt4all, RyokoAI's ShareGPT52K datasets, Databricks labs' Dolly, and Anthropic's HH. We will be releasing these models as StableLM-Tuned-Alpha.
https://github.com/Stability-AI/StableLM
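For anyone who wants to poke at the base models, a minimal generation sketch with Hugging Face transformers would look roughly like this. The checkpoint id is an assumption based on Stability AI's naming; check the repo for the exact model ids and, for the tuned variants, the expected prompt format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-base-alpha-7b"  # assumed id; see the repo for the real ones

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 to fit a 7B model on a single consumer GPU
    device_map="auto",          # requires the accelerate package
)

inputs = tokenizer("The three laws of robotics are", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```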
Forwarded from Записки админа
💻 Slides is a neat little utility for presentations. You prepare a specially formatted markdown file and get a presentation based on that file right in the terminal...
https://github.com/maaslalani/slides
Also, the presentation can be hosted and made available over SSH. No idea why you'd need that, but still...
https://github.com/maaslalani/slides#ssh
#slides #cli #tui
Forwarded from Записки админа
🧾 A great-looking TUI translator that supports several translation engines: Google, Apertium, Argos, Bing, ChatGPT, Reverso.
https://github.com/eeeXun/gtt
#tui #translate #будничное
Forwarded from Записки админа
🔧 And here's another interesting option for making backups with rsync: https://github.com/laurent22/rsync-time-backup
#фидбечат #backup #rsync
Forwarded from Записки админа
📦 A Python port of the backup utility above, which does its backups with rsync...
https://github.com/basnijholt/rsync-time-machine.py
#backup #rsync #будничное
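For context on what both of these backup tools do under the hood, here's a minimal sketch of the rsync --link-dest "time machine" trick: every run creates a dated snapshot directory, and files that haven't changed are hard-linked against the previous snapshot, so they take no extra space. This is a simplified illustration, not code from either repo; the paths and snapshot naming are made up.

```python
#!/usr/bin/env python3
"""Toy incremental backup in the rsync-time-backup style (illustrative only)."""
import subprocess
from datetime import datetime
from pathlib import Path

SOURCE = Path.home() / "documents"      # what to back up (example path)
DEST = Path("/mnt/backup/documents")    # where snapshots live (example path)

def snapshot() -> None:
    DEST.mkdir(parents=True, exist_ok=True)
    new = DEST / datetime.now().strftime("%Y-%m-%d-%H%M%S")
    latest = DEST / "latest"            # symlink pointing at the newest snapshot
    cmd = ["rsync", "-a", "--delete"]
    if latest.exists():
        # Hard-link unchanged files against the previous snapshot.
        cmd.append(f"--link-dest={latest.resolve()}")
    cmd += [f"{SOURCE}/", str(new)]
    subprocess.run(cmd, check=True)
    if latest.is_symlink():
        latest.unlink()
    latest.symlink_to(new.name)

if __name__ == "__main__":
    snapshot()
```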
Forwarded from EXL
Check out the eye candy that has never before been shown on the displays of our feeble little P2K Motorolas!
I ported some interesting Yeti3D 3D engines and tech demos from the Nintendo Game Boy Advance, and although the phone's Neptune LTE processor (ARM7TDMI @ 52 MHz) and ATI Imageon W22xx graphics chip don't leave much room to play with, I still managed to squeeze something out of them.
Those who, by some strange coincidence, still don't have a Motorola phone on the P2K platform can poke at these 3D engine tech demos online, right in a web browser:
1. https://lab.exlmoto.ru/y3d/
2. https://lab.exlmoto.ru/y3do/
Longer videos with FPS measurements:
1. https://www.youtube.com/watch?v=HqgMxK00QFg
2. https://www.youtube.com/watch?v=qHC2QYrFZlk
Project sources, for anyone interested in fast integer-only 3D engines (a quick fixed-point sketch follows the links below):
1. https://github.com/EXL/P2kElfs/tree/master/Yeti3D
2. https://github.com/EXL/P2kElfs/tree/master/Yeti3D-Old
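For anyone wondering what "fast integer-only 3D engines" means in practice: the ARM7TDMI has no FPU, so all fractional math is done in fixed point. Below is a toy sketch of the common 16.16 format; the helpers are illustrative and not taken from the Yeti3D sources.

```python
# 16.16 fixed point: the top 16 bits hold the integer part, the bottom 16 the fraction.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS                 # 1.0 in 16.16 fixed point

def to_fixed(x: float) -> int:
    return int(x * ONE)

def to_float(x: int) -> float:
    return x / ONE

def fx_mul(a: int, b: int) -> int:
    # Multiply, then shift the extra 16 fractional bits back out.
    return (a * b) >> FRAC_BITS

def fx_div(a: int, b: int) -> int:
    # Pre-shift the numerator so the quotient keeps its fractional precision.
    return (a << FRAC_BITS) // b

a, b = to_fixed(1.5), to_fixed(-2.25)
print(to_float(fx_mul(a, b)))        # -3.375
print(to_float(fx_div(a, b)))        # approximately -0.6667
```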
Forwarded from HN Best Comments
Re: The Sourdough Framework
I thought this was going to be yet another javascript frontend framework with yet another less than descriptive name, but lo and behold this is actually about sourdough. Neat!
EDIT: Appears the author has one for Pizza Dough[0]. Gotta try that one out as it's more applicable to me than sourdough.
[0] https://github.com/hendricius/pizza-dough
junon, 1 day ago
Spaghetti lord almighty, a few years ago I searched for apps like these on 4pda and all sorts of specialized forums with no luck; and then this morning I see a post about them on the front page of Pikabu, damn it.
Apparently the relevance and demand have grown: yesterday these smartphone features were mostly needed by dead-drop couriers and guys out on a "quest", and today they're needed by, uh, everyone?
In short: a trigger / data wiper for Android after x wrong PIN entries, a trigger / wiper on entering a second (duress) PIN, a trigger on attempts to get into the phone over USB, and a trigger that runs actions on receiving a message.
https://github.com/x13a/Wasted
https://github.com/x13a/Sentry
https://github.com/x13a/Duress
Thanks to the author, x13a; clearly someone who cares :)