Machine learning books and papers
22.8K subscribers
975 photos
54 videos
928 files
1.32K links
Admin: @Raminmousa
WhatsApp: +989333900804
ID: @Machine_learn
link: https://t.me/Machine_learn
📄 Advances of Artificial Intelligence in Anti-Cancer Drug Design: A Review of the Past Decade



📎 Study the paper

@Machine_learn
👍2
Forwarded from Papers
One of the best topics in text classification is multi-domain sentiment analysis. For this purpose we designed a model titled
Title: TRCAPS: The Transformer-based Capsule Approach for Persian Multi-Domain Sentiment Analysis
which achieved much better results than IndCaps.
Friends who need a paper in the NLP area can join this paper until the end of this week.

The target journal is Array (Elsevier).

Participants in this paper will also be required to complete some tasks.
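(Illustrative sketch, our addition: the post shares no code, but the general TRCAPS idea, a transformer encoder feeding a capsule classification head with dynamic routing, can be roughed out as below. The encoder choice, dimensions, and class count are assumptions, not the paper's actual implementation.)

# Hypothetical sketch of a transformer encoder + capsule head in the spirit of
# TRCAPS (not the paper's code). Encoder name and sizes are assumptions.
import torch
import torch.nn as nn

def squash(s, dim=-1, eps=1e-8):
    # Capsule non-linearity: preserves direction, maps vector norm into [0, 1).
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

class CapsuleHead(nn.Module):
    def __init__(self, in_dim=768, n_classes=2, caps_dim=16, n_iters=3):
        super().__init__()
        self.n_iters = n_iters
        # One linear map per class capsule, applied to every token state.
        self.W = nn.Parameter(0.01 * torch.randn(n_classes, in_dim, caps_dim))

    def forward(self, h):  # h: (batch, seq, in_dim) transformer hidden states
        u_hat = torch.einsum("bsi,cid->bscd", h, self.W)   # per-token class predictions
        b = torch.zeros(u_hat.shape[:3], device=h.device)  # routing logits (batch, seq, class)
        for _ in range(self.n_iters):                      # dynamic routing by agreement
            c = b.softmax(dim=2)                           # coupling of each token to classes
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))   # (batch, class, caps_dim)
            b = b + torch.einsum("bscd,bcd->bsc", u_hat, v)    # reward agreement
        return v.norm(dim=-1)  # capsule length acts as the class score

# Usage (assumed names): encode Persian text with any transformer encoder,
# e.g. ParsBERT, then route its hidden states through the capsule head.
# hidden = encoder(input_ids).last_hidden_state   # (batch, seq, 768)
# scores = CapsuleHead()(hidden)                  # (batch, n_classes)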

@Raminmousa
@Machine_learn
@Paper4money
👍4
Linear Algebra Done Right

📓 Book

@Machine_learn
4👍4
💡 Ultimate Guide to Fine-Tuning LLMs

📚 link

@Machine_learn
👍2
LLM Engineer's Handbook: Master the art of engineering Large Language Models from concept to production.

🖥 Github

@Machine_learn
👍3🔥1
Forwarded from Papers
👍1
Only author slots 2 and 4 are left on this one...!
👍3
📑 A guide to RNA sequencing and functional analysis


📎 Study the paper

@Machine_learn
👍41
The State of AI Report

📚 Report

@Machine_learn
👍2
NotebookLlama: An Open Source version of NotebookLM

🖥 Github

@Machine_learn
5
Tutorial on Diffusion Models for Imaging and Vision

📚 Book

@Machine_learn
5👍2
An Infinite Descent into Pure Mathematics

📚 Book

@Machine_learn
👍31
Forwarded from Github LLMs
🌟 Zamba2-Instruct

The family includes two models:

🟢Zamba2-1.2B-instruct;
🟠Zamba2-2.7B-instruct.



# Clone the repo (Zamba2 uses Zyphra's fork of transformers)
git clone https://github.com/Zyphra/transformers_zamba2.git
cd transformers_zamba2

# Install the repository & accelerate:
pip install -e .
pip install accelerate

# Inference:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and the bf16 model onto the GPU
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-2.7B-instruct")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-2.7B-instruct", device_map="cuda", torch_dtype=torch.bfloat16)

# Build a multi-turn chat and render it with the model's chat template
user_turn_1 = "user_prompt1."
assistant_turn_1 = "assistant_prompt."
user_turn_2 = "user_prompt2."
sample = [{'role': 'user', 'content': user_turn_1}, {'role': 'assistant', 'content': assistant_turn_1}, {'role': 'user', 'content': user_turn_2}]
chat_sample = tokenizer.apply_chat_template(sample, tokenize=False)

# Tokenize, generate greedily (do_sample=False), and decode
input_ids = tokenizer(chat_sample, return_tensors='pt', add_special_tokens=False).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=150, return_dict_in_generate=False, output_scores=False, use_cache=True, num_beams=1, do_sample=False)
print(tokenizer.decode(outputs[0]))
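Note (our addition, not from the original post): decoding outputs[0] also echoes the prompt; to print only the newly generated reply, slice off the prompt tokens first:

# Hypothetical follow-up, assuming the variables from the example above
prompt_len = input_ids["input_ids"].shape[-1]
print(tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True))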





🖥GitHub

https://t.me/deep_learning_proj
👍51
📕 Applied Causal #Inference Powered by #MachineLearning

📌Book

@Machine_learn
👍2
Thinking LLMs: General Instruction Following with Thought Generation

📚 Read

@Machine_learn
👍1
Greetings, today is the last chance to participate in this paper...!
👍1