Towards NLP🇺🇦
All ngrams about Natural Language Processing that are of interest to @iamdddaryna
IFAN: An Explainability-Focused Interaction Framework for Humans and NLP Models

We have talked before about different techniques for explaining ML and NLP models. OK, we have explained a model's output for a specific sample and highlighted some tokens there. What should happen next?

📌You can use human feedback to debug and improve your model! Your steps can be:
1. 🔍You identify misclassified samples (for instance, in hate speech detection, you notice that the model is biased against certain target words).
2. 📊You explain the model's decisions and see that it puts too much or too little weight/attention on some words.
3. 📝You edit the explanation, i.e. the weights of the word spans that should contribute to the correct label.
4. 🔄You do this for several samples and retrain the adapter layer of your model on the new samples (see the sketch below).
5. Now your model's behavior is fixed, i.e. it is debiased!
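For illustration, here is a minimal PyTorch sketch of step 4, assuming a BERT-style backbone. This is not the IFAN implementation; the checkpoint, sample, and label are placeholders.

import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

# Minimal sketch (not the IFAN code): freeze the backbone and train only
# a small bottleneck adapter + classifier on the human-corrected samples.
name = "bert-base-uncased"  # placeholder backbone
tok = AutoTokenizer.from_pretrained(name)
backbone = AutoModel.from_pretrained(name)
for p in backbone.parameters():
    p.requires_grad = False  # human feedback updates only the adapter

adapter = nn.Sequential(nn.Linear(768, 64), nn.ReLU(), nn.Linear(64, 768))
classifier = nn.Linear(768, 2)  # e.g. hate speech vs. neutral
opt = torch.optim.AdamW(list(adapter.parameters()) + list(classifier.parameters()), lr=1e-4)

texts = ["an example relabeled by annotators"]  # placeholder sample
labels = torch.tensor([0])                      # its corrected label

for _ in range(10):  # a few steps suffice for a handful of edits
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    cls = backbone(**batch).last_hidden_state[:, 0]  # [CLS] embedding
    logits = classifier(cls + adapter(cls))          # residual adapter
    loss = nn.functional.cross_entropy(logits, labels)
    loss.backward(); opt.step(); opt.zero_grad()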

All this can be done with our platform:
https://ifan.ml/

This is the first stable version, and we are still developing many new features for it (for instance, a report page where you can monitor changes in the model's performance). But already now, we believe the platform can be a solid step towards human-in-the-loop debugging of NLP models🤖.

📜The corresponding paper about this first version [link]
Why is text detoxification especially important now?

No chatbot is safe from being toxic at some point (even ChatGPT!). So, if you want to have safe conversations with your users, it is still important to handle toxic language.

With our text detoxification technology, you can:

* Before training your language model or chatbot, you can preprocess scraped training data to ensure it contains no toxicity. But you should not just throw away toxic samples: you can detoxify them! Then the major part of the dataset is not lost, and the content is preserved.
* You can ensure that user messages are non-toxic as well. Again, the message is preserved, and after detoxification the conversation will not drift into an unsafe tone.
* You can safeguard the answers from your chatbot, too! The conversation does not have to stop even if your chatbot generates something toxic: its reply will be detoxified, and the user will see a neutral answer (see the sketch below).
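To illustrate the last point, here is a minimal sketch that wraps chatbot replies, assuming our English ParaDetox seq2seq model s-nlp/bart-base-detox on Hugging Face; swap in your own detoxification model as needed.

from transformers import pipeline

# Minimal sketch: rewrite a toxic reply instead of blocking it.
# The checkpoint name is an assumption; substitute your own detox model.
detox = pipeline("text2text-generation", model="s-nlp/bart-base-detox")

def safe_reply(reply: str) -> str:
    # Pass every candidate reply through the detoxifier before showing it.
    return detox(reply, max_new_tokens=64)[0]["generated_text"]

print(safe_reply("what the hell is this crap about?"))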

Check out all the info about our research and all models in this repo!
LLMs are everywhere: what other thoughts can we come up with?

This post is a list of alternative sources to read about LLMs and the changes they have brought:

* Choose Your Weapon: Survival Strategies for Depressed AI Academics 🙃 "What should we do now that ChatGPT is here?" has probably been asked by every student/researcher in NLP academia. This position paper can provide you with several ideas on how to keep going😉

* Closed AI Models Make Bad Baselines: we will see how many papers mentioning ChatGPT appear at this ACL. However, closed models are not the way to do benchmarking in research.

* Towards Climate Awareness in NLP Research: together with the rise in dataset and model sizes, our responsibility to the environment also increases. In modern research, it is good practice to report how much computation time, resources, and CO2 emissions were used.

* Step by Step Towards Sustainable AI: if you want to round off your reading about responsible AI, I really recommend this AlgorithmWatch issue. Professionals from Hugging Face and several German institutions share their thoughts on the key points we should pay attention to in order to deploy AI that is safe for humanity and nature.
Language models can explain neurons in language models

What about using GPT-4 to automatically write explanations for the behavior of neurons in large language models and to score those explanations?

* Explain: Generate an explanation of the neuron’s behavior by showing the explainer model (token, activation) pairs from the neuron’s responses to text excerpts.
* Simulate: Use the simulator model to simulate the neuron's activations based on the explanation.
* Score: Automatically score the explanation based on how well the simulated activations match the real activations.
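To make the scoring step concrete, here is a toy sketch with made-up numbers; the paper's actual metric is correlation-based, and the exact details may differ.

import numpy as np

# Toy numbers, not from the paper: score an explanation by how well the
# simulated activations correlate with the neuron's real activations.
real = np.array([0.0, 2.1, 0.0, 5.3, 0.2])        # real activations per token
simulated = np.array([0.1, 1.8, 0.0, 4.9, 0.0])   # simulator's predictions
score = np.corrcoef(real, simulated)[0, 1]
print(f"explanation score: {score:.3f}")          # near 1.0 = faithful explanation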

Blog from Closed OpenAI: [link]
Paper: [link]
Code and collected dataset of explanations: [link]
On the Impossible Safety of Large AI Models

The hype around LLMs has reached not only NLP-related fields but also the lives of professionals from many other domains. However, I personally have not seen any use case where a model performs with 100%, or 99.999%, or 99.9%... accuracy.

A theoretical argument that it is impossible to build an arbitrarily accurate and safe AI model:
https://arxiv.org/abs/2209.15259

Why? TL;DR:

* User-generated data: user-generated data are both mostly unverified and potentially highly sensitive;
* High-dimensional memorization: want a better score on more data? You need way more parameters. However, contexts are limitless, so... do we need an infinite number of parameters? The complexity of “fully satisfactory” language processing might be orders of magnitude larger than today’s LLMs, in which case we may still obtain greater accuracy with larger models.
* Highly heterogeneous users: the distribution of texts generated by one user greatly diverges from the distribution of texts generated by another. More data means more users and, again, more contexts, which can be difficult to fully grasp and generalize.
* Sparse heavy-tailed data per user: even if we consider a single user, their data is not dense enough to generalize from. We should expect especially large empirical heterogeneity in language data, as the samples we obtain from a user can completely stand out from that user’s overall language distribution.

As a result, LAIM (large AI model) training is unlikely to be easier than mean estimation: a usual objective in ML is to estimate a distribution, often assumed to be normal, whose mean we want to estimate. How many combinations of such distributions can we hope to estimate? (See the toy simulation below.)
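A toy simulation of mine (not from the paper) showing why heavy-tailed per-user data hurts even simple mean estimation:

import numpy as np

# Toy illustration: the empirical mean works for Gaussian data but is a
# poor estimator for heavy-tailed (Cauchy) data, however much we average.
rng = np.random.default_rng(0)
gauss = rng.normal(0.0, 1.0, size=(1000, 10))   # 1000 users, 10 samples each
heavy = rng.standard_cauchy(size=(1000, 10))    # heavy-tailed user data

# True location is 0 in both cases; compare the typical per-user error.
print("gaussian error:  ", np.median(np.abs(gauss.mean(axis=1))))  # ~0.2
print("heavy-tail error:", np.median(np.abs(heavy.mean(axis=1))))  # ~1.0, averaging does not help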

+ We need to find a balance between accuracy and privacy.

🤔Pretty challenging task. Will we be able to solve it anyway?
A PhD Student’s Perspective on Research in NLP in the Era of Very Large Language Models

As our IFAN project was recommended as one of the promising research directions, I will recommend in return reading this recent paper, which answers the question: "So, what now in NLP research if ChatGPT is out?"
Spoiler: the world has not ended, and we still have plenty of work to do!

https://arxiv.org/abs/2305.12544

Based on my research work and what I want to explore next, here is my top list of research directions:

1. Misinformation fight. There are still zero working online automated fake news and propaganda detection systems, while the risk of misinformation spreading keeps increasing.
2. Multilingualism. The usual reminder that there are more languages than English. Like at least 7k more.
3. Explainability and Interpretability. Do we trust models' decisions? Still far from 100%. We can integrate these models into decision-making processes only if their behavior is transparent. And now think about whether we can even explain every NLP task: the methods are completely different.
4. Fewer resources. Less memory to store and fine-tune models. Less data to learn from, too! Do we really need all these training samples? Or do we just need diverse enough data?
5. Human-NLP model interaction. We have to admit that ChatGPT was the first NLP model used not only by specialists but by everyone, because it is more or less pleasant and safe to use. Even if the model cannot answer some input, it still provides a nicely written answer. The wrapper is also extremely important: how should we package these models so that users are comfortable working with them? And what about children, if we want to adapt such models for education even from early ages?

Be brave, be creative, be inspired
My PyConDE & PyData Berlin 2023: Text Detoxification

It was a pleasure for me to be part of PyConDE & PyData Berlin 2023 — amazing scientists and developers from all over Europe came together to discuss and share experience in cutting-edge data science. Of course, there were a lot of talks about LLMs 😉

Firstly, I want to invite you to take a look at my research on text detoxification. Even with all the recent advances, our models are still relevant in the field of combating toxic speech: [video]

Secondly, I recommend paying attention to other talks that I personally found interesting:
* Keynote talk: Miroslav Šedivý: Lorem ipsum dolor sit amet. A lot of fun facts about different European languages 😃
* Erin Mikail Staples, Nikolai: Improving Machine Learning from Human Feedback. There is a lot of attention on human feedback right now; a showcase of a library to help you with it.
* Ines Montani: Incorporating GPT-3 into practical NLP workflows. Told you, a lot of attention on LLMs 😉
* Lev Konstantinovskiy: Prompt Engineering 101. An introduction to LangChain — a powerful library to ease your interaction with LLMs.
* A final recommendation not from NLP: Maren Westermann: How to increase diversity in open source communities. The IT and DS communities are diverse and spread all over the world. Let's communicate respectfully with each other!

Of course, there are many more! The whole playlist is [here]😎
A Benchmark Dataset to Distinguish Human-Written and Machine-Generated Scientific Papers

SCIENTISTS ARE GOING TO SUBMIT PAPERS WRITTEN BY CHATGPT, SCIENCE IS GONNA DIE

Or not?

Our chair's work on whether we can detect machine-generated or paraphrased scientific articles.
TL;DR: yes, we can, even with logistic regression.

For generation, we tried out: GPT-2, GPT-3, ChatGPT, Galactica, and SciGen.
Each article consists of: Abstract + Intro + Conclusion.

🤗dataset with ~70k rows of scientific texts generated by different models;
there, you can also find fine-tuned 🤗Galactica and 🤗RoBERTa models for detection.
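To show what "even with logistic regression" means in practice, here is a minimal sketch of such a detector; the two samples below are made up, and the real training data is the ~70k-row dataset above.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy samples only; train on the real dataset linked above.
texts = ["We propose a novel method for ...", "As a large language model, I ..."]
labels = [0, 1]  # 0 = human-written, 1 = machine-generated

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["In this paper, we study ..."]))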

The full paper with all tables of results and explainability investigations [link]
Happy New Year 2024

Thank you for being interested in NLP and my view on it 🤩

For the new year, I have some new ideas for the community -- stay tuned 😉

Be professional, believe in yourself, be open to new ideas, and all other positive tokens in your texts 🥳
Ukrainian Toxicity Classification

I am glad to announce the first-of-its-kind dataset for toxicity detection in Ukrainian🇺🇦 (~20k rows):
https://huggingface.co/datasets/ukr-detect/ukr-toxicity-dataset

Together with an xlm-roberta-base fine-tuned on it:
https://huggingface.co/ukr-detect/ukr-toxicity-classifier
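A minimal usage sketch; the output label names below are my assumption, so check the model card for the exact ones.

from transformers import pipeline

# Load the fine-tuned Ukrainian toxicity classifier from the hub.
clf = pipeline("text-classification", model="ukr-detect/ukr-toxicity-classifier")
print(clf("Гарного дня!"))  # e.g. [{'label': 'non-toxic', 'score': 0.99}] (labels assumed)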

Happy to contribute to Ukrainian NLP💪

This work was done together with the amazing Master's student Valeriia Khylenko!
ELLIS Winter School on Foundation Models

Amsterdam, 12th-15th March 2024
https://amsterdam-fomo.github.io/

Foundation Models, and their origin, analysis and development have been typically associated with the US and Big Tech. Yet, a critical share of important insights and novel approaches do come from Europe, both within academia and industry. Part of this winter school's goal is to highlight these fresh perspectives and give the students an in-depth look into how Europe is guiding its own research agenda with unique directions and bringing together the community. The workshop will take place at the University of Amsterdam.

Lectures from top researchers from DeepMind, Google Research, and top EU unis.

Deadline to apply: 15th February 2024 23:59 CET
Artificial Intelligence 2023 Playlist

Stanford series that brought together Chris Manning, Andrew Ng, Fei-Fei Li, and other researchers from Stanford to discuss the state of NLP:
https://youtube.com/playlist?list=PLoROMvodv4rPEjA3yzoqkq3J321MfH7FZ&si=eUEZC-4K3X0Ap074

I recommend it for casual watching at times when you are asking yourself "What's next?"

Especially:
* The Chris Manning and Andrew Ng discussion about NLP.
* The Andrew Ng and Fei-Fei Li discussion about human-centered AI.
Ukrainian Text Classification Corpora p2

We continue to enrich datasets for the classification of texts in the Ukrainian language. This time, we worked on the translation of English-language data into Ukrainian and obtained:

1. Ukrainian NLI corpus: https://huggingface.co/datasets/ukr-detect/ukr-nli-dataset-translated-stanford translated from the Stanford SNLI.
2. Ukrainian formality corpus: https://huggingface.co/datasets/ukr-detect/ukr-formality-dataset-translated-gyafc translated from the English GYAFC.
3. In addition to the toxicity corpus presented previously, a corpus translated from the English Jigsaw toxicity classification dataset: https://huggingface.co/datasets/ukr-detect/ukr-toxicity-dataset-translated-jigsaw
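A minimal loading sketch; split and column names are assumptions, so check each dataset card for the exact schema.

from datasets import load_dataset

# Load the new Ukrainian corpora from the hub and inspect their schemas.
nli = load_dataset("ukr-detect/ukr-nli-dataset-translated-stanford")
formality = load_dataset("ukr-detect/ukr-formality-dataset-translated-gyafc")
print(nli, formality)  # available splits and columns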

You are very welcome to use and test them😉
TextDetox CLEF 2024

We are glad to invite you to participate in the first-of-its-kind multilingual Text Detoxification shared task!

https://pan.webis.de/clef24/pan24-web/text-detoxification.html

TL;DR
Task formulation: transfer the style of a text from toxic to neutral (e.g. what the f**k is this about? -> what is this about?)
9 Languages: English, Spanish, Chinese, Hindi, Arabic, German, Russian, Ukrainian, and Amharic
🤗 https://huggingface.co/textdetox

More details:

Identification of toxicity in user texts is an active area of research. Today, social networks such as Facebook and Instagram are trying to address the problem of toxicity. However, they usually simply block such texts. We suggest a proactive reaction to toxicity instead: namely, we aim to present a neutral version of a user message that preserves its meaningful content. We denote this task as text detoxification.

In this competition, we suggest you create detoxification systems for 9 languages from several linguistic families. However, the availability of training corpora differs between the languages. For English and Russian, parallel corpora of several thousand toxic-detoxified pairs (as presented above) are available, so you can fine-tune text generation models on them. For the other languages, no such corpora will be provided for the dev phase. The main challenge of this competition is to perform both supervised and unsupervised cross-lingual detoxification.

You are very welcome to test all modern LLMs on text detoxification and safety with our data, as well as to experiment with different unsupervised approaches based on MLMs or other paraphrasing methods (see the toy sketch below)!
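For instance, a toy sketch of one unsupervised MLM-based idea (in the spirit of condBERT-style approaches, not an official baseline): mask words from a toxic lexicon and let a multilingual MLM fill in neutral replacements. The tiny lexicon here is a placeholder.

from transformers import pipeline

# Toy unsupervised detox: mask lexicon hits, let the MLM propose substitutes.
fill = pipeline("fill-mask", model="bert-base-multilingual-cased")
TOXIC = {"stupid", "idiotic"}  # in practice: a proper per-language toxic lexicon

def detox(text: str) -> str:
    words = text.split()
    for i, w in enumerate(words):
        if w.lower().strip(".,!?") in TOXIC:
            masked = " ".join(words[:i] + [fill.tokenizer.mask_token] + words[i + 1:])
            words[i] = fill(masked)[0]["token_str"]  # top MLM prediction
    return " ".join(words)

print(detox("this is a stupid idea"))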

The final leaderboard will be built on a manual evaluation of a subset of the test set, performed via crowdsourcing on the Toloka.ai platform.

In the end, you will have an opportunity to write and then present a paper at CLEF 2024 (https://clef2024.imag.fr/) which will take place in Grenoble, France!

Important Dates
February 1, 2024: First data available and run submission opens.
April 22, 2024: Registration closes.
May 6, 2024: Run submission deadline and results out.
May 31, 2024: Participants paper submission.
July 8, 2024: Camera-ready participant papers submission.
September 9-12, 2024: CLEF Conference in Grenoble and Touché Workshop.
Has It All Been Solved? Open NLP Research Questions Not Solved by Large Language Models

PhD application season is starting. If you were afraid that the only topic you would be offered is prompting LLMs, here is some good, scientifically backed news for you — there is still plenty to do in NLP!

Amazing colleagues from the University of Michigan have prepared a list of still-open NLP research questions, 45 of them! Including:
* Multilinguality
* Reasoning
* Knowledge Bases
* Language Grounding
* Computational Social Science
* Online Environments
* Child Language Acquisition
* Non-verbal Communication
* Synthetic Datasets
* Interpretability
* Efficient NLP
* NLP in Education
* NLP in Healthcare
* NLP and Ethics

Yes, in some directions we have already come a long way, so other topics are becoming important and are finally becoming possible to explore.

Check out the full text (to appear at COLING):
https://arxiv.org/abs/2305.12544

P.S. And a reminder that we are running an important multilingual safe-language shared task on text detoxification — start your first research experiments now😉
TextDetox CLEF 2024: Final week of the dev phase

We would like to remind you that this week is the final week of the dev phase of our multilingual TextDetox shared task:
https://pan.webis.de/clef24/pan24-web/text-detoxification.html
🤗https://huggingface.co/textdetox

On April 22nd, the official registration for CLEF 2024 will close, so please register here if you have not done so yet:
https://clef2024-labs-registration.dei.unipd.it/

Also, we would like to remind you that the dev phase leaderboard is still open and you are welcome to make your submission!
Please submit to CodaLab:
https://codalab.lisn.upsaclay.fr/competitions/18243
or to TIRA (as an additional option in case of technical problems):
https://www.tira.io/task/pan24-text-detoxification

Otherwise, stay tuned for the test set release!
TextDetox CLEF 2024: Test Phase

Our shared task on multilingual text detoxification is ongoing and reaching its final phase😉

We are releasing the parallel pairs for the dev part:
https://huggingface.co/datasets/textdetox/multilingual_paradetox

and new toxic sentences for the test part:
https://huggingface.co/datasets/textdetox/multilingual_paradetox_test
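A minimal sketch for getting started; split and column names are assumptions, so check the dataset cards on our 🤗 textdetox page for the exact schemas.

from datasets import load_dataset

# Dev pairs (toxic -> neutral) for fine-tuning, new test sentences to detoxify.
dev = load_dataset("textdetox/multilingual_paradetox")
test = load_dataset("textdetox/multilingual_paradetox_test")
print(dev)   # inspect per-language splits and column names
print(test)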

We are waiting for your submission here:
https://codalab.lisn.upsaclay.fr/competitions/18243
till May 12th🤗

You can submit for ANY language! There are 9 of them: English, Spanish, German, Chinese, Arabic, Hindi, Ukrainian, Russian, and Amharic.