Ai Events
5.96K subscribers
950 photos
83 videos
26 files
763 links
This channel aims to cover all events related to artificial intelligence, data science, etc.
Hamid Mahmoodabadi


Contact me:
@MahmoodabadiHamid
When allocating scarce resources with AI, randomization can improve fairness

*The use of machine-learning models to allocate scarce resources or opportunities can be improved by introducing randomization into the decision-making process.*
*Researchers from MIT and Northeastern University argue that traditional fairness methods, such as adjusting features or calibrating scores, are insufficient to address structural injustices and inherent uncertainties.*
*The introduction of randomization can prevent one deserving person or group from always being denied a scarce resource, and can be especially beneficial in situations involving uncertainty or repeated negative decisions.*
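
One way to realize this idea is a weighted lottery: a model's scores become selection weights, so stronger candidates are favored but nobody with a positive score is deterministically shut out. A minimal sketch, not the paper's actual method; the names, scores, and scoring scale below are illustrative:

```python
import random

def weighted_lottery(candidates, scores, k, seed=None):
    """Allocate k scarce slots by a lottery weighted by model scores.

    Unlike a hard top-k cutoff, every candidate with a positive score
    keeps some chance of selection, so repeated allocations do not
    always deny the same people.
    """
    rng = random.Random(seed)
    pool = dict(zip(candidates, scores))
    winners = []
    for _ in range(min(k, len(pool))):
        names = list(pool)
        weights = [pool[n] for n in names]
        pick = rng.choices(names, weights=weights, k=1)[0]
        winners.append(pick)
        del pool[pick]  # sample without replacement
    return winners

print(weighted_lottery(["ann", "bob", "cam", "dee"], [0.9, 0.8, 0.4, 0.1], k=2, seed=0))
```

With a deterministic top-k rule, "cam" and "dee" would never be selected; under the lottery they occasionally are, which is exactly the fairness property the researchers argue for in repeated allocations.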
Read more

@Ai_Events
.
👍1
Van Gogh's paintings were animated with artificial intelligence.

@Ai_Events
👍1
MIT researchers advance automated interpretability in AI models

Understanding AI Systems
Artificial intelligence models are becoming increasingly prevalent, and understanding how they work is crucial for auditing and improving their performance. MIT researchers developed MAIA, a system that automates the interpretation of artificial vision models. MAIA can label individual components, identify biases, and even design experiments to test hypotheses.
MAIA in Action
MAIA demonstrates its ability to tackle three key tasks: labeling individual components, cleaning up image classifiers, and hunting for hidden biases. For example, MAIA was asked to describe the concepts that a particular neuron inside a vision model is responsible for detecting; it used its tools to design experiments and test hypotheses, arriving at a comprehensive answer.
Limitations and Future Directions
While MAIA is a significant step forward in interpretability, it has limitations. For instance, its performance is limited by the quality of the tools it uses and can sometimes display confirmation bias. Future directions include scaling up the method to apply it to human perception and developing tools to overcome its limitations.
As artificial intelligence models become increasingly prevalent, and are integrated into diverse sectors like health care, finance, education, transportation, and entertainment, understanding how they work under the hood is critical. Imagine if we could directly investigate the human brain by manipulating each of its individual neurons to examine their roles in perceiving a particular object.
...
Read more

@Ai_Events
.
👍21🤡1
🔊 Launch event for the open-source release of the DadmaTools AI toolkit

At this event, the DadmaTools NLP toolkit will be officially released as open source, and plans for community-driven development with university and private-sector participation and government support will be presented.


Join the nationwide gathering of the country's NLP specialists.



🔹Time:
Monday, Mordad 15 (August 5), 10:00 to 12:00

🔹Location:

Innovation and Prosperity Fund, amphitheater hall

📎Registration link:
https://evand.com/events/dadmatools


@Ai_Events
🤡1
MLx Generative AI (Theory, Agents, Products)

Dates: 22-24 August 2024 (3 days)

Location: London School of Economics (LSE) & Online

Register: www.oxfordml.school/genai

Deadline: 12th August

- Perfect for professionals, researchers, and students looking to stay ahead in the rapidly evolving field of GenAI.
- Upon completion, participants will receive CPD-accredited certificates.
- For any enquiries, contact us at contact@oxfordml.school

@Ai_Events
👍1🤩1🤡1😍1
AI model identifies certain breast tumor stages likely to progress to invasive cancer


The researchers from MIT and ETH Zurich have developed an AI model that can identify the different stages of ductal carcinoma in situ (DCIS) from a cheap and easy-to-obtain breast tissue image.


The researchers trained and tested the model on a dataset of 560 tissue-sample images from 122 patients at three different stages of disease. The model identifies eight cell states that are important markers of DCIS and determines the proportion of cells in each state in a tissue sample.


However, the researchers found that the proportions of cells in each state are not enough on their own; how the cells are organized also changes across stages. Designing the model to consider both the proportion and the arrangement of cell states significantly boosted its accuracy.
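
The two ingredients the model combines can be illustrated with toy features. This is a hand-rolled sketch, not the researchers' actual pipeline; the state labels and the grid layout are invented for illustration:

```python
from collections import Counter

def state_proportions(cell_states):
    """Fraction of cells in each discrete state."""
    counts = Counter(cell_states)
    total = len(cell_states)
    return {s: c / total for s, c in counts.items()}

def neighbor_mixing(grid):
    """Fraction of horizontally adjacent cell pairs whose states differ.

    A crude stand-in for 'arrangement': two samples can have identical
    state proportions yet very different mixing scores.
    """
    pairs = diffs = 0
    for row in grid:
        for a, b in zip(row, row[1:]):
            pairs += 1
            diffs += a != b
    return diffs / pairs if pairs else 0.0

# Same proportions (half A, half B), different spatial arrangement:
print(state_proportions(["A", "A", "B", "B"]))
print(neighbor_mixing([["A", "A", "B", "B"]]))  # clustered
print(neighbor_mixing([["A", "B", "A", "B"]]))  # interleaved
```

The two grids share identical proportions but differ in the mixing score, which is why features of arrangement carry information that proportions alone miss.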


In many instances the model's assessments clearly agreed with a pathologist's evaluation of the same samples, and it could surface information about features in a tissue sample, like the organization of cells, that a pathologist could use in decision-making.

Read more

@Ai_Events
.
👍3
Large language models don’t behave like people, even though we may expect them to


Researchers from MIT created a framework to evaluate LLMs based on how well they align with human beliefs about their capabilities. They found that when models are misaligned with those beliefs, users may become overconfident or underconfident in them, leading to unexpected failures; more capable models tended to perform worse in such high-stakes situations because of this misalignment.

The researchers also introduced the concept of "human generalization," in which people form beliefs about an LLM's capabilities based on their interactions with it. They found that humans are worse at generalizing about LLMs than about other people, which can widen the gap between human expectations and model performance.

Understanding how people form beliefs about LLMs is crucial for deploying them effectively. The researchers hope to conduct further studies on this topic and to develop ways to incorporate human generalization into the development of LLMs.
Read more

@Ai_Events
.
Argentina is implementing artificial intelligence to predict and prevent future crimes

The Ministry of Security is setting up a specialized unit involving members of the Federal Police and other security forces. Its main task will be to use machine-learning algorithms to analyze historical crime data, forecast future criminal activity, and monitor social networks for potential criminal communications. Despite government assurances, the initiative has raised skepticism and concern among the public.

Source

@Ai_Events
👎2👍1
AI method radically speeds predictions of materials’ thermal properties

Researchers developed a virtual node graph neural network (VGNN) to predict phonon dispersion relations. This approach is more efficient than traditional methods and can be used to predict phonons directly from a material's atomic coordinates.

The VGNN uses virtual nodes to represent phonons, which allows it to skip complex calculations and make the method more efficient. The researchers proposed three versions of VGNNs with increasing complexity, each of which can be used to predict phonons directly from a material's atomic coordinates.

The VGNN method is not limited to phonons and can also be used to predict challenging optical and magnetic properties. The researchers plan to refine the technique to capture small changes that can affect phonon structure in the future.
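
The virtual-node idea can be sketched at the graph-construction level. This is a simplified illustration of the general pattern, not the researchers' implementation; the edge-list representation and node numbering are assumptions:

```python
def add_virtual_nodes(num_atoms, edges, num_virtual):
    """Augment an atomic graph with fully connected virtual nodes.

    In a VGNN-style model, message passing updates virtual-node
    embeddings alongside atom embeddings, and the virtual nodes'
    final states are read out as the predicted quantities (here,
    stand-ins for phonon bands), letting the network bypass explicit
    dynamical-matrix calculations.
    """
    new_edges = list(edges)
    for v in range(num_atoms, num_atoms + num_virtual):
        for a in range(num_atoms):
            new_edges.append((a, v))  # atom -> virtual
            new_edges.append((v, a))  # virtual -> atom
    return num_atoms + num_virtual, new_edges

# 3-atom chain with 2 virtual readout nodes:
n, e = add_virtual_nodes(3, [(0, 1), (1, 2)], num_virtual=2)
print(n, len(e))  # → 5 14
```

Because the virtual nodes see every atom, the augmented graph's size grows only linearly with the number of atoms, which is one reason the approach scales better than explicit phonon calculations.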

The work has the potential to accelerate the design of more efficient energy generation systems and improve the development of more efficient microelectronics.
Read more

@Ai_Events
.
OpenAI is developing a new tool aimed at detecting students using ChatGPT for assignments, but its release remains uncertain.

Last year, OpenAI introduced an AI text detector, which was discontinued due to its low accuracy. The new watermarking method promises high accuracy and identifies text generated by ChatGPT through subtle alterations in word choice. However, the watermark's vulnerability to tampering and rewording remains an issue, and there is concern that watermarking could stigmatize AI use among non-native English speakers.
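
OpenAI has not published the details of its watermark, but a widely discussed academic approach (a "green list" watermark) pseudo-randomly partitions candidate tokens based on the preceding token and biases generation toward the green partition; detection then counts green tokens. A toy detector-side sketch under that assumption, with an arbitrary hash-based split:

```python
import hashlib

GREEN_RATIO = 0.5  # fraction of the vocabulary put on the green list

def is_green(prev_token, token):
    """Pseudo-random green/red split of tokens, seeded by the previous token."""
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return h[0] / 256 < GREEN_RATIO

def green_fraction(tokens):
    """Fraction of tokens on the green list given their predecessor.

    Watermarked generation is biased toward green tokens, so a fraction
    well above GREEN_RATIO is statistical evidence of a watermark, while
    unwatermarked text should hover near GREEN_RATIO.
    """
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)

print(green_fraction("the cat sat on the mat".split()))
```

The sketch also makes the vulnerability concrete: rewording the text changes the token pairs, which scrambles the green-token count and weakens detection.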

Source


@Ai_Events
Your thoughts?


@Ai_Events
👍6👎2
Creating and verifying stable AI-controlled systems in a rigorous and flexible way

Researchers have developed new techniques to rigorously certify Lyapunov calculations in complex systems, enabling safer deployment of robots and autonomous vehicles. The approach efficiently searches for and verifies a Lyapunov function, providing stability guarantees for the system. This has potential wide-ranging applications, including ensuring a smoother ride for autonomous vehicles and drones.
The researchers found a frugal shortcut to the training and verification process, generating cheaper counterexamples and optimizing the robotic system to account for them. They also developed a novel verification formulation that enables the use of a scalable neural network verifier, α,β-CROWN, to provide rigorous worst-case scenario guarantees beyond the counterexamples.
The technique is general and could be applied to other applications, such as biomedicine and industrial processing. The researchers are exploring how to improve performance in systems with higher dimensions and account for data beyond lidar readings.
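
The flavor of a Lyapunov stability check can be shown on a small linear system. This sampled check is only illustrative and far weaker than the rigorous worst-case certification the researchers' neural verifier provides; the dynamics and the quadratic V below are textbook examples, not from the paper:

```python
def simulate(f, x0, dt=0.01, steps=500):
    """Forward-Euler rollout of dx/dt = f(x)."""
    traj = [list(x0)]
    for _ in range(steps):
        x = traj[-1]
        dx = f(x)
        traj.append([xi + dt * di for xi, di in zip(x, dx)])
    return traj

def lyapunov_decreasing(V, traj, tol=1e-12):
    """Check that candidate V is non-increasing along a sampled trajectory.

    This is a necessary-but-not-sufficient check: it only looks at sampled
    states, whereas a rigorous certificate must bound the decrease condition
    over all states, which is what a worst-case neural verifier provides.
    """
    vals = [V(x) for x in traj]
    return all(b <= a + tol for a, b in zip(vals, vals[1:]))

# Stable linear system x' = [x2, -2*x1 - 3*x2]; this V solves the Lyapunov
# equation A^T P + P A = -I, so it must decay along trajectories.
f = lambda x: [x[1], -2.0 * x[0] - 3.0 * x[1]]
V = lambda x: 1.25 * x[0] ** 2 + 0.5 * x[0] * x[1] + 0.25 * x[1] ** 2
traj = simulate(f, [1.0, 0.0])
print(lyapunov_decreasing(V, traj))  # → True
```

The hard part the researchers address is the reverse direction: searching for a V (here handed to us in closed form) and then certifying it over all states, including worst-case counterexamples.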
Read more

@Ai_Events
.
👍1
👍2
We need to prepare for ‘addictive intelligence’!

AI companions like Replika offer users a chance to connect with holographic copies of deceased loved ones. But experts warn that these interactions can be addictive, thanks to AI's ability to cater to our desires and mirror our emotions. As AI becomes more advanced, it's essential to investigate the incentives driving its development and create policies to address potential harms.
Read more

@Ai_Events
.
Google DeepMind trained a robot to beat humans at table tennis

A new table tennis bot developed by Google DeepMind has beaten all beginner-level human opponents and 55% of those playing at amateur level. Although it lost to advanced players, it's an impressive advance. The techniques behind the bot could be applied to useful real-world tasks in homes and warehouses. Researchers used a two-part approach to train the system to mimic human skills like hand-eye coordination and quick decision-making.
Read more

@Ai_Events
.
👍3
AI “godfather” Yoshua Bengio has joined a UK project to prevent AI catastrophes


Researchers at Safeguarded AI aim to build AI systems that offer quantitative guarantees about their impact on the world. They're using mathematical analysis to supplement human testing, ensuring AI systems operate as intended. The team hopes to create a 'gatekeeper' AI that reduces safety risks in high-stakes sectors like transport and energy. Without AI safeguarding AI, complex systems will be too complicated to analyze manually.
Read more 

@Ai_Events
.
👍3👎1
How to Choose the Right Graph for Data Visualization

@Ai_Events
👏2
Overcoming Obstacles to Enterprise-Wide AI Deployment

Only 5.4% of US businesses used AI to produce a product or service in 2024. Scaling AI requires strategic transitions in infrastructure, data governance, and supplier ecosystems. AI-readiness spending is set to rise, with 9 in 10 companies increasing their AI budgets. Data liquidity and quality are crucial for deployment: 50% of companies cite data quality as the most limiting factor. Companies say they are willing to pause AI adoption if doing so ensures safety and security.
Read more

@Ai_Events
.
MIT Researchers Find Potential in Using Large Language Models for Anomaly Detection

MIT researchers are exploring the potential of using Large Language Models (LLMs) for anomaly detection in time-series data. The approach, called SigLLM, involves converting time-series data into text-based inputs that LLMs can process. The researchers found that LLMs can be used to identify anomalies in wind farm data with minimal training required. While LLMs didn't outperform state-of-the-art deep learning models, they showed promise as a less expensive and more efficient option. Future work aims to improve performance, speed, and understanding of LLM performance in anomaly detection.
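
The core preprocessing idea, turning numbers into text a language model can read, can be sketched as follows. The exact SigLLM conversion is not reproduced here; this min-max rescaling scheme is an assumption for illustration:

```python
def series_to_text(values):
    """Serialize a numeric time series as text a language model can read:
    min-max scale to the range 0..100, round to integers, join with commas.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat series
    scaled = [round(100 * (v - lo) / span) for v in values]
    return ",".join(str(s) for s in scaled)

# The anomalous spike at 0.95 stands out as '100' against small neighbors:
print(series_to_text([0.12, 0.15, 0.11, 0.95, 0.13]))  # → 1,5,0,100,2
```

Once the series is text, an LLM can be prompted to continue or flag the sequence, with the large deviations surfacing as obvious outliers in the token stream.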


@Ai_Events
.
👍2👏1
Trump Falsely Accuses Kamala Harris of Using AI-Generated Crowds

Former US President Donald Trump has made a baseless attack on Kamala Harris' presidential campaign, claiming she 'A.I.'d' photos of a massive crowd that showed up to see her speak at a Detroit airport campaign rally.

Despite the image being an actual photo of a 15,000-person crowd, Trump falsely accused Harris of cheating and using AI-generated images to deceive voters.

The accusation marks the first time a US presidential candidate has personally raised the specter of AI-generated fakery by an opponent, highlighting widespread fears and misunderstandings over online information in the AI age.

To identify authentic images, it's essential to verify information through multiple sources, including news outlets, journalists, and attendees who were present at the event. In this case, numerous sources, including the AP, Getty, and local news outlets, confirmed the large crowds at the rally.

The incident serves as a guide on how to fact-check online information, especially as AI tools become increasingly good at generating photorealistic images.

@Ai_Events
.
👍2
Google's Pixel 9 Enhances Camera with AI Capabilities

Google's Pixel smartphones have been renowned for their exceptional camera systems, and the tech giant has taken it a step further by incorporating artificial intelligence features that expand its capabilities. The latest Pixel 9 series boasts more generative AI capabilities that can alter, improve, and enhance your photos.

For the Pixel 9 series, Google has completely rebuilt its HDR+ pipeline, a crucial image-processing algorithm that ensures images have the right levels of contrast, exposure, color, and shadow. New features like Add Me, Reimagine, Autoframe, and Zoom Enhance go beyond the capture stage, making it easy for anyone to perform edits that previously required photo-editing skills.

Add Me enables users to take selfies with loved ones in front of a subject, such as the Eiffel Tower, without having to hand over the phone. This mode works by scanning the area briefly, snapping a picture, and then swapping places to capture the desired shot.

Reimagine is the latest addition to Google's Magic Editor, allowing users to select an area of a photo and input a text prompt to achieve the desired outcome, such as turning daytime photos to nighttime or adding stormy clouds.

These AI capabilities make it easier for anyone to manipulate their photos, eliminating the need for extensive editing knowledge. With the Pixel 9 series, Google is revolutionizing the way we capture and edit our memories.

@Ai_Events
.