Data Science by ODS.ai 🦜
First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math and the applications of the former. To reach the editors, contact: @haarrp
GPT-3: Language Models are Few-Shot Learners

#openAI trained GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and tested its performance in the few-shot setting.
The model is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model.

It achieves strong performance on many NLP datasets, including translation, question answering, and cloze tasks, as well as on several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic.
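
To make the few-shot setting concrete, here is a minimal sketch of how such a prompt might be assembled for the 3-digit-arithmetic task (the wording and demonstrations are illustrative, not taken from the paper):

# Few-shot prompting: the task is specified purely as text.
# No gradient updates happen; the model simply continues the string.
demonstrations = [
    ("348 + 276", "624"),
    ("402 + 59", "461"),
]
query = "513 + 288"

prompt = "Add the numbers.\n"
for question, answer in demonstrations:
    prompt += f"Q: {question}\nA: {answer}\n"
prompt += f"Q: {query}\nA:"
print(prompt)  # this string would be sent to the model as-is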

They also find that GPT-3 can generate samples of news articles that human evaluators have difficulty distinguishing from articles written by humans.

175 billion parameters! And on some tasks it still doesn't perform well.
That's all you need to know about it.


paper: https://arxiv.org/abs/2005.14165

#nlp #gpt #gpt3 #language #model
GPT-3 application for website form generation

Turns out the #GPT3 model is capable of generating #JSX code (which is HTML-like layout code for #React), given a description of the required blocks to generate.
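
For illustration, the kind of prompt/completion pair behind the demo might look like this (the description, the JSX, and the gpt3_complete helper are all made up, not actual model output or a real API):

# Describe the desired UI in plain English; the model completes the JSX.
description = "a search input next to a green button that says 'Subscribe'"
prompt = f"Description: {description}\nJSX:"
# jsx = gpt3_complete(prompt)  # hypothetical helper, not a real API call
# A plausible completion:
# <div>
#   <input type="text" placeholder="Search" />
#   <button style={{color: "green"}}>Subscribe</button>
# </div>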

The author reports that there are exceptions, given the model's current output limit of 512 tokens.

Why this is important: one might suppose that in the future programmers will just write specifications and tests, and the AI will generate the code. Given the speed of progress, that won't be surprising at all.

And more sophisticated models will probably be able to work around the hard output limit when generating code, but that is obviously still an area of active research.

A more realistic assessment is that upcoming code-generation tools will simply allow more people to build products, following the #nocode movement.

Twitter thread: https://twitter.com/sharifshameem/status/1282676454690451457

#codegeneration #NLU
How GPT-3 works: a visual thread

A short thread with cool animations explaining how GPT-3 works, by Jay Alammar.

collected twitter thread: https://threader.app/thread/1285498971960598529


#nlp #transformers #gpt3 #jayalammar
#GPT3 attracted lots of attention. Let's try a new format of discussing the matter in the comments, provided by Peerboard.

To access the comments, just click the link below ⬇️⬇️⬇️, authorize with Telegram, and follow the discussion.
Applying GPT-3 to generate neural network code

Matt Shumer used GPT-3 to generate code for a machine learning model, just by describing the dataset and required output.

#GPT3 #inception #codegeneration #NLU #NLP
English to regex

Generating a regex just by describing it and providing an example (apparently powered by GPT-3).
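
As a rough illustration of the idea (the description, pattern, and test strings below are made up, not taken from the site):

import re

# the user describes the pattern in English...
description = "match a US ZIP code, optionally with a 4-digit extension"
# ...and the tool returns a candidate regex like this one
candidate = r"^\d{5}(-\d{4})?$"

# the provided example doubles as a sanity check on the generated pattern
assert re.match(candidate, "90210")
assert re.match(candidate, "90210-1234")
assert not re.match(candidate, "9021")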


web page: https://losslesshq.com

#regex #gpt3
Philosopher AI: a website to generate text with #GPT3

A tool to generate text on different topics. Sensitive topics such as sex, religion, or even nationality are blocked.

A great way to spread awareness of #ai and to show nontechnical friends that #Skynet is not a problem to be concerned about yet.

Website: https://philosopherai.com/philosopher/humanity-on-mars-73ac00

#nlu #nlp
🔥New breakthrough on text2image generation by #OpenAI

DALL·E: Creating Images from Text

This architecture is capable of understanding style descriptions as well as complex relationships between objects in context.

That opens a whole new perspective for digital agencies, potentially threatens stock photo sites, and creates new opportunities for regulators and lawyers to work on.

Interesting times!

Website: https://openai.com/blog/dall-e/

#GAN #GPT3 #openai #dalle #DL
Summarizing Books with Human Feedback

#OpenAI fine-tuned #GPT3 to summarize books well enough to be human-readable. Main approach: recursively split the text into parts, then meta-summarize the summaries.
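
A minimal sketch of that recursive scheme, with the language-model call stubbed out (in the paper it is a GPT-3 model fine-tuned from human feedback, not this placeholder):

def summarize(text: str) -> str:
    # placeholder for the language-model call
    return text[:200]

def summarize_book(text: str, chunk_size: int = 2000) -> str:
    # base case: short enough to summarize in one call
    if len(text) <= chunk_size:
        return summarize(text)
    # split into parts, summarize each part...
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    partials = " ".join(summarize(c) for c in chunks)
    # ...then meta-summarize the concatenated summaries, recursing
    return summarize_book(partials, chunk_size)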

This is really important because once there is a great summarization #SOTA, we won't need editors to write these posts for you. And researchers will ultimately have some assistance interpreting models' results.

BlogPost: https://openai.com/blog/summarizing-books/
ArXiV: https://arxiv.org/abs/2109.10862

#summarization #NLU #NLP
AI Generated Pokemon Sprites with GPT-2

The author trained a #GPT2 model to generate #pokemon sprites, encoding them as lines of characters (including color). Surprisingly, the results were decent, which leaves us wondering whether #GPT3 results would be better.
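
A toy sketch of the encoding idea (the palette and the exact scheme in the project differ):

# map each palette color to a character and emit the sprite row by row,
# so a language model can treat the image as ordinary text
palette = {(0, 0, 0): "a", (255, 255, 255): "b", (255, 0, 0): "c"}

def sprite_to_text(pixels):
    # pixels: a list of rows, each row a list of RGB tuples
    return "\n".join("".join(palette[p] for p in row) for row in pixels)

tiny_sprite = [
    [(0, 0, 0), (255, 0, 0)],
    [(255, 0, 0), (255, 255, 255)],
]
print(sprite_to_text(tiny_sprite))  # prints "ac" and "cb", one row per line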

YouTube: https://www.youtube.com/watch?v=Z9K3cwSL6uM
GitHub: https://github.com/MatthewRayfield/pokemon-gpt-2
Article: https://matthewrayfield.com/articles/ai-generated-pokemon-sprites-with-gpt-2/
Example: https://matthewrayfield.com/projects/ai-pokemon/

#NLU #NLP #generation #neuralart
The Illustrated Retrieval Transformer
by @jayalammar

The latest batch of language models can be much smaller yet achieve GPT-3-like performance by being able to query a database or search the web for information. A key takeaway is that building larger and larger models is not the only way to improve performance.
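
A minimal sketch of the retrieval idea (this is not DeepMind's RETRO: the embeddings are random stand-ins, and prompt concatenation replaces RETRO's cross-attention over retrieved chunks):

import numpy as np

# toy text database with stand-in random embeddings
database = [
    "Paris is the capital of France.",
    "GPT-3 is an autoregressive model with 175B parameters.",
]
db_vectors = np.random.rand(len(database), 64)

def retrieve(query_vector, k=1):
    # nearest neighbors by dot-product score
    scores = db_vectors @ query_vector
    return [database[i] for i in np.argsort(-scores)[:k]]

# the smaller language model conditions on retrieved text plus the prompt
query_vec = np.random.rand(64)
context = retrieve(query_vec)
prompt = "\n".join(context) + "\nQ: What is the capital of France?\nA:"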


http://jalammar.github.io/illustrated-retrieval-transformer/

#nlp #gpt3 #retro #deepmind
🦜 Hi!

We are the first Telegram Data Science channel.


The channel started as a collection of notable papers, news, and releases shared with members of the Open Data Science (ODS) community. Over the years of just keeping the thing going, we grew into an independent online media outlet supporting the principles of free and open access to information related to Data Science.


Ultimate Posts

* Where to start learning more about Data Science. https://github.com/open-data-science/ultimate_posts/tree/master/where_to_start
* @opendatascience channel audience research. https://github.com/open-data-science/ods_channel_stats_eda


Open Data Science

ODS.ai is an international community of people connected to Data Science in any way.

Website: https://ods.ai



Hashtags

Over the years we have accumulated a big collection of materials, most of them accompanied by hashtags.

#deeplearning #DL – posts about deep neural networks (> 1 layer)
#cv – posts related to Computer Vision. Pictures and videos
#nlp #nlu – Natural Language Processing and Natural Language Understanding. Texts and sequences
#audiolearning #speechrecognition – related to audio information processing
#ar – augmented reality related content
#rl – Reinforcement Learning (agents, bots and neural networks capable of playing games)
#gan #generation #generativeart #neuralart – about neural art and image generation
#transformer #vqgan #vae #bert #clip #StyleGAN2 #Unet #resnet #keras #Pytorch #GPT3 #GPT2 – related to specific architectures or frameworks
#coding #CS – content related to software engineering
#OpenAI #microsoft #Github #DeepMind #Yandex #Google #Facebook #huggingface – hashtags related to certain companies
#productionml #sota #recommendation #embeddings #selfdriving #dataset #opensource #analytics #statistics #attention #machine #translation #visualization


Chats

- Data Science Chat https://t.me/datascience_chat
- ODS Slack, through the invite form on the website

ODS resources

* Main website: https://ods.ai
* ODS Community Telegram Channel (in Russian): @ods_ru
* ML trainings Telegram Channel: @mltrainings
* ODS Community Twitter: https://twitter.com/ods_ai

Feedback and Contacts

You are welcome to reach the administration through the Telegram bot: @opendatasciencebot
🔥Out of One, Many: Using Language Models to Simulate Human Samples

TLDR: GPT-3 has an unexpected application: modelling sociological studies. The average responses of certain groups can be predicted, with some quantifiable accuracy, by in silico modelling.

What this means: sociologists won't have to conduct costly live studies and will be able to run experiments in simulation. Marketers and politicians are getting their hands on a cheap tool for testing their slogans and value propositions. This enables people to test more hypotheses faster, and to manipulate society more efficiently.
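
A made-up sketch of how such "silicon sampling" could be set up (the backstory, question, and gpt3_complete helper are illustrative, not from the paper):

# condition the model on a demographic backstory, then read off its answer;
# repeating this many times (or reading the token probabilities for
# "yes"/"no") approximates the group's average response
backstory = (
    "I am a 45-year-old teacher from Ohio. "
    "Politically, I consider myself a moderate."
)
question = "Do you support raising the federal minimum wage? Answer yes or no:"
prompt = backstory + "\n" + question
# response = gpt3_complete(prompt)  # hypothetical helper, not a real API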

ArXiV: https://arxiv.org/abs/2209.06899

#gpt3 #psychohistory #nlu #sociology
There is a claim that #ChatGPT is capable of writing code based on a text input

Why it matters: it could lower the barrier to entry for programming and allow more tools for efficient software development to emerge.

Source: tweet

#GPT3 #NLU #NLP #codegeneration