Forwarded from Студентський математичний семінар
"Set records. Break records. Shatter records". 😎
Любі друзі, ця цитата сьогодні про нас усіх, про наш невпинний Студматсемінар!
Рекордне одинадцяте засідання СМС протягом одного сезону відбудеться в Сб, 20.05 об 11:00.
Тема: "Topological Machine Learning from WL algorithm to Simplicial Neural Networks".
Доповідач: Олександр Яворський (@svefn), аспірант 1-го р.н. кафедри Математичного Моделювання та Аналізу Даних у КПІ, Machine Learning Researcher у Knowledgator.
Зум: https://zoom.us/j/5197673308?pwd=eGRtaVIzbHlNT3RoRjc5U2FsVENGUT09
Запрошуємо всіх на захопливу доповідь про топологічні методи Машинного навчання та узагальнення графових нейронних мереж. Буде гаряче!
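As background for the talk's starting point (not material from the talk itself): the 1-WL (Weisfeiler-Leman) color-refinement test that the title alludes to fits in a few lines of Python. The fixed `rounds` cap is an arbitrary choice for this sketch.

```python
# Minimal sketch of 1-WL color refinement, the classical graph-isomorphism
# heuristic that graph neural networks are often compared against.
from collections import Counter

def wl_colors(adj, rounds=3):
    """adj maps each node to its neighbor list; returns a color per node."""
    colors = {v: 0 for v in adj}  # start with a uniform coloring
    for _ in range(rounds):
        # New color = hash of (own color, sorted multiset of neighbor colors).
        colors = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    return colors

def wl_indistinguishable(adj1, adj2, rounds=3):
    """True if 1-WL cannot tell the graphs apart (they may still differ)."""
    return (Counter(wl_colors(adj1, rounds).values())
            == Counter(wl_colors(adj2, rounds).values()))

# Classic failure case: a 6-cycle and two disjoint triangles look identical
# to 1-WL, which is one motivation for going beyond it.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_c3 = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
assert wl_indistinguishable(c6, two_c3)
```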
👍6
Sounds emitted by plants under stress are airborne and informative
Abstract:
Stressed plants show altered phenotypes, including changes in color, smell, and shape. Yet, airborne sounds emitted by stressed plants have not been investigated before. Here we show that stressed plants emit airborne sounds that can be recorded from a distance and classified. We recorded ultrasonic sounds emitted by tomato and tobacco plants inside an acoustic chamber, and in a greenhouse, while monitoring the plant’s physiological parameters. We developed machine learning models that succeeded in identifying the condition of the plants, including dehydration level and injury, based solely on the emitted sounds. These informative sounds may also be detectable by other organisms. This work opens avenues for understanding plants and their interactions with the environment and may have significant impact on agriculture.
https://www.cell.com/cell/fulltext/S0092-8674(23)00262-3
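The abstract doesn't specify the models, but the general recipe it describes (spectral features from ultrasonic clips plus a standard classifier) can be sketched as follows. The sampling rate, feature choice, and classifier here are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch: mean log-power spectra of each ultrasonic clip, fed to a
# standard classifier. All parameter choices below are assumptions.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

SAMPLE_RATE = 500_000  # ultrasound recording needs a very high sampling rate

def clip_features(clip: np.ndarray) -> np.ndarray:
    """Mean log-power per frequency bin of one recorded sound."""
    _, _, sxx = spectrogram(clip, fs=SAMPLE_RATE, nperseg=1024)
    return np.log1p(sxx).mean(axis=1)  # one value per frequency bin

def train_classifier(clips, labels):
    """clips: list of 1-D arrays; labels: e.g. 'dry', 'cut', 'control'."""
    X = np.stack([clip_features(c) for c in clips])
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```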
👍1
The False Promise of Imitating Proprietary LLMs
Abstract:
An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). This approach looks to cheaply imitate the proprietary model's capabilities using a weaker open-source model. In this work, we critically analyze this approach. We first finetune a series of LMs that imitate ChatGPT using varying base model sizes (1.5B--13B), data sources, and imitation data amounts (0.3M--150M tokens). We then evaluate the models using crowd raters and canonical NLP benchmarks. Initially, we were surprised by the output quality of our imitation models -- they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT's style but not its factuality. Overall, we conclude that model imitation is a false promise: there exists a substantial capabilities gap between open and closed LMs that, with current methods, can only be bridged using an unwieldy amount of imitation data or by using more capable base LMs. In turn, we argue that the highest leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems.
> We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT's style but not its factuality
https://arxiv.org/pdf/2305.15717
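For concreteness, the imitation recipe under critique is plain supervised finetuning on (instruction, stronger-model response) pairs. A minimal sketch with Hugging Face `transformers` might look like the following; the base model, data file, and hyperparameters are placeholders, not the paper's setup.

```python
# Minimal sketch of the imitation recipe the paper critiques: supervised
# finetuning of an open base LM on (instruction, stronger-model response)
# pairs. Model, data file, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in for a 1.5B-13B base LM
tok.pad_token = tok.eos_token                # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

def to_features(ex):
    # One training sequence = prompt followed by the imitated response.
    return tok(ex["instruction"] + "\n" + ex["response"],
               truncation=True, max_length=512)

# Hypothetical JSON file of {"instruction": ..., "response": ...} records.
ds = (load_dataset("json", data_files="imitation_data.json")["train"]
      .map(to_features, remove_columns=["instruction", "response"]))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="imitation-lm", num_train_epochs=3),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```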
👍1👌1
Forwarded from 📚kruasan's library
Milky Eggs
Standardized exams measure intrinsic ability, not racial or socioeconomic privilege
Recently, the usage of standardized testing has fallen under deep scrutiny in America. [...] I outline a clear, step-by-step argument that lays out a strong case for the pro-standardized testing viewpoint. [...] Upon a comprehensive review of the literature…
Forwarded from Axis of Ordinary
IQ scores by ethnic group in a nationally-representative sample of 10-year old American children https://humanvarieties.org/2023/05/27/iq-scores-by-ethnic-group-in-a-nationally-representative-sample-of-10-year-old-american-children/
👍5🤔2⚡1🔥1
Forwarded from Axis of Ordinary
Improving Mathematical Reasoning with Process Supervision https://openai.com/research/improving-mathematical-reasoning-with-process-supervision
"We've trained a model to achieve a new state-of-the-art in mathematical problem solving by rewarding each correct step of reasoning (“process supervision”) instead of simply rewarding the correct final answer (“outcome supervision”). In addition to boosting performance relative to outcome supervision, process supervision also has an important alignment benefit: it directly trains the model to produce a chain-of-thought that is endorsed by humans."
"We've trained a model to achieve a new state-of-the-art in mathematical problem solving by rewarding each correct step of reasoning (“process supervision”) instead of simply rewarding the correct final answer (“outcome supervision”). In addition to boosting performance relative to outcome supervision, process supervision also has an important alignment benefit: it directly trains the model to produce a chain-of-thought that is endorsed by humans."
👍2