machine-learning-books-and-papers

Machine learning books and papers

Locale: en
Subscribers: 17.7K
Description:
Admin: https://t.me/Machine_learn
11/17/2024, 11:48:01 AM
Hello everyone. Many friends have asked me about designing deep learning projects from start to finish. In the pack below I have explained 36 projects in detail:

1-Deep Learning Basic
-01_Introduction
--01_How_TensorFlow_Works
2-Classification apparel
-Classification apparel double capsule
-Classification apparel double CNN
3-Alzheimer's detection using CNN (ResNet)
4-Fake News (Covid-19 dataset)
-Multi-channel
-3DCNN model
-Baseline + Char CNN
-Fake News Covid CapsuleNet
5-3DCNN Fake News
6-Recommender systems
-GRU+LSTM MovieLens
7-Multi-Domain Sentiment Analysis
-Dranziera CapsuleNet
-Dranziera CNN Multi-channel
-Dranziera LSTM
8-Persian Multi-Domain SA
-Bi-GRU Capsule Net
-Multi-CNN
9-Recommendation system
-Factorization Recommender, Ranking Factorization Recommender, Item Similarity Recommender (turicreate)
-SVD, SVD++, NMF, Slope One, k-NN, Centered k-NN, k-NN Baseline, Co-Clustering (surprise; see the sketch after this list)
10-NIH X-Ray
-Optimized CNN on the full NIH X-Ray dataset
-MobileNet
-Transfer learning
-Capsule Network on the full NIH X-Ray dataset
Friends who need these projects can get in touch with me.
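For item 9 above, here is a minimal sketch of training one of the listed algorithms with the surprise library. The built-in MovieLens ml-100k dataset is used purely for illustration; the pack's actual data and settings may differ.

# Illustrative only: SVD matrix factorization with the surprise library,
# evaluated by cross-validation on the built-in MovieLens ml-100k dataset.
from surprise import SVD, Dataset
from surprise.model_selection import cross_validate

data = Dataset.load_builtin("ml-100k")      # downloads MovieLens 100k on first use
algo = SVD(n_factors=100, n_epochs=20)      # latent-factor recommender
cross_validate(algo, data, measures=["RMSE", "MAE"], cv=5, verbose=True)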

11/17/2024, 11:47:56 AM
Hello. The paper below is at the initial submission stage. Author positions 2 and 3 are open. Anyone who is interested can message my ID. It is also possible to provide a recommendation after the work is completed.

Title:
Automated Concrete Crack Detection and Geometry Measurement Using YOLOv8
Description:
This paper presents a comprehensive approach for automatic detection and quantification of concrete cracks using the YOLOv8 deep learning model. By leveraging advanced object detection capabilities, our system identifies concrete cracks in real-time with high accuracy, addressing challenges of complex backgrounds and varying crack patterns. Following crack detection, we employ image processing techniques to measure key geometric parameters such as width, length, and area. This integrated system enables rapid, precise analysis of structural integrity, offering a scalable solution for infrastructure monitoring and maintenance.

Target Journal:
Nature, Scientific Reports
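A rough idea of the geometry-measurement step described above (not the paper's actual pipeline): assuming a crack has already been localized (e.g., by the YOLOv8 detector) and binarized, OpenCV contour analysis can estimate area, length, and width. The pixel_size_mm calibration factor below is a hypothetical parameter for illustration.

import cv2
import numpy as np

# Illustrative sketch: measure crack geometry from a binary (0/255) uint8 mask,
# e.g. a thresholded crop taken from a YOLOv8 bounding box; pixel_size_mm converts px -> mm.
def crack_geometry(mask: np.ndarray, pixel_size_mm: float = 1.0) -> dict:
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return {"area_mm2": 0.0, "length_mm": 0.0, "width_mm": 0.0}
    crack = max(contours, key=cv2.contourArea)      # largest connected component
    area_px = cv2.contourArea(crack)                # crack area in pixels
    (_, _), (w, h), _ = cv2.minAreaRect(crack)      # rotated bounding-box extents
    length_px, width_px = max(w, h), min(w, h)
    return {
        "area_mm2": area_px * pixel_size_mm ** 2,
        "length_mm": length_px * pixel_size_mm,
        "width_mm": width_px * pixel_size_mm,
    }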



11/10/2024, 10:18:59 AM
Hello. The paper below is at the initial submission stage. Author positions 1 to 3 are open. Anyone who is interested can message my ID. Title: Automated Concrete Crack Detection and Geometry Measurement Using YOLOv8 Description: This paper presents…
11/9/2024, 10:41:35 AM
Hello. The paper below is at the initial submission stage. Author positions 1 to 3 are open. Anyone who is interested can message my ID.

Title:
Automated Concrete Crack Detection and Geometry Measurement Using YOLOv8
Description:
This paper presents a comprehensive approach for automatic detection and quantification of concrete cracks using the YOLOv8 deep learning model. By leveraging advanced object detection capabilities, our system identifies concrete cracks in real-time with high accuracy, addressing challenges of complex backgrounds and varying crack patterns. Following crack detection, we employ image processing techniques to measure key geometric parameters such as width, length, and area. This integrated system enables rapid, precise analysis of structural integrity, offering a scalable solution for infrastructure monitoring and maintenance.

Target Journal:
Nature, Scientific Reports



11/9/2024, 10:37:04 AM
Smol TTS models are here! OuteTTS-0.1-350M - zero-shot voice cloning, built on the LLaMa architecture, CC-BY license! 🔥

> Pure language modeling approach to TTS
> Zero-shot voice cloning
> LLaMa architecture w/ Audio tokens (WavTokenizer)
> BONUS: Works on-device w/ llama.cpp

Three-step approach to TTS:

> Audio tokenization using WavTokenizer (75 tok per second).
> CTC forced alignment for word-to-audio-token mapping
> Structured prompt creation w/ transcription, duration, audio tokens.
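For step 3, a purely hypothetical sketch of what a structured prompt assembled from (word, duration, audio tokens) triples could look like; the marker names below are made up for illustration and are not OuteTTS's actual prompt format.

# Hypothetical structured-prompt builder, for illustration only; the real
# OuteTTS prompt format may differ.
def build_tts_prompt(aligned):
    # aligned: list of (word, duration_seconds, audio_token_ids) from steps 1-2
    text = " ".join(word for word, _, _ in aligned)
    body = "".join(
        f"<word:{w}><dur:{d:.2f}>" + "".join(f"<a{t}>" for t in toks)
        for w, d, toks in aligned
    )
    return f"<text>{text}</text>{body}"

print(build_tts_prompt([("hello", 0.42, [101, 87]), ("world", 0.38, [12, 340, 7])]))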



11/7/2024, 7:55:42 AM
Thank God, over the past three months we have managed to carry out the following collaborative papers:
Submitted 4 papers in the area of multi-modal wound classification

Presented two papers in the area of breast cancer segmentation

Presented three papers in the area of cancer detection,
and about 80% of the work on these papers is already complete.

Once these papers are finished, we will soon publish a list of the collaborative papers.

11/4/2024, 6:09:57 AM
A repository of papers on the topic of agents based on large language models (LLMs). The papers are divided into categories such as LLM agent architectures, autonomous LLM agents, reinforcement learning (RL), natural language processing methods, multimodal approaches, tools for developing LLM agents, and more.



11/4/2024, 4:37:28 AM
Title: BERTCaps: BERT Capsule for Persian Multi-Domain Sentiment Analysis

Abstract:
Sentiment classification is widely known as a domain-dependent problem. Learning an accurate domain-specific sentiment classifier requires a large number of labeled samples, which are expensive and time-consuming to annotate. Multi-domain sentiment analysis based on multi-task learning can leverage the labeled samples available in each individual domain, alleviating the need for large amounts of labeled data across all domains. In this article we propose BERTCaps, a multi-domain classifier in which BERT is used for instance representation and a capsule network is used for instance learning. On the evaluation dataset, the model achieved an accuracy of 0.9712 in polarity classification and 0.8509 in domain classification.

Journal:
IF: 2.3

We need author positions 2 and 4 for this paper.
Anyone interested in participating can message my ID.
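A minimal sketch of the general BERT-plus-capsule idea described in the abstract (a simplified illustration, not the authors' exact BERTCaps architecture); the ParsBERT checkpoint name, capsule dimension, and routing settings are assumptions.

# Simplified BERT + capsule-routing classifier, for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel

def squash(s, dim=-1):
    # Capsule squashing non-linearity: keeps direction, maps length into [0, 1).
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + 1e-8)

class BertCapsClassifier(nn.Module):
    def __init__(self, n_classes, caps_dim=16, routing_iters=3,
                 bert_name="HooshvareLab/bert-base-parsbert-uncased"):  # assumed checkpoint
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        d = self.bert.config.hidden_size
        self.W = nn.Parameter(0.01 * torch.randn(n_classes, d, caps_dim))
        self.routing_iters = routing_iters

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state  # (B, T, d)
        u_hat = torch.einsum("btd,cdk->btck", h, self.W)    # per-token predictions for each class capsule
        b = torch.zeros(u_hat.shape[:-1], device=h.device)  # routing logits (B, T, C)
        for _ in range(self.routing_iters):                 # dynamic routing-by-agreement
            c = F.softmax(b, dim=-1)
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))             # class capsules (B, C, K)
            b = b + torch.einsum("btck,bck->btc", u_hat, v)
        return v.norm(dim=-1)                                # capsule lengths = class scores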


11/3/2024, 9:26:51 AM
SmolLM2 1.7B - beats Qwen 2.5 1.5B & Llama 3.2 1B, Apache 2.0 licensed, trained on 11 trillion tokens 🔥

> 135M, 360M & 1.7B parameter models
> Trained on FineWeb-Edu, DCLM, The Stack, along w/ new mathematics and coding datasets
> Specialises in Text rewriting, Summarization & Function Calling
> Integrated with transformers & model on the hub!

You can run the 1.7B in less than 2GB VRAM on a Q4 👑

Fine-tune, run inference, test, train, repeat - intelligence is just 5 lines of code away!
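Since it is integrated with transformers, here is a minimal sketch of running the 1.7B instruct model; the Hub checkpoint name below is assumed.

# Minimal sketch; the checkpoint name "HuggingFaceTB/SmolLM2-1.7B-Instruct" is assumed.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize: capsule networks route information between capsule layers."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=100, do_sample=True, temperature=0.3)
print(tokenizer.decode(output[0]))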



11/1/2024, 2:58:02 PM
Title: BERTCaps: BERT Capsule for Persian Multi-Domain Sentiment Analysis

Abstract:
Sentiment classification is widely known as a domain-dependent problem. Learning an accurate domain-specific sentiment classifier requires a large number of labeled samples, which are expensive and time-consuming to annotate. Multi-domain sentiment analysis based on multi-task learning can leverage the labeled samples available in each individual domain, alleviating the need for large amounts of labeled data across all domains. In this article we propose BERTCaps, a multi-domain classifier in which BERT is used for instance representation and a capsule network is used for instance learning. On the evaluation dataset, the model achieved an accuracy of 0.9712 in polarity classification and 0.8509 in domain classification.

Journal:
IF: 2.3

We need author positions 2 and 4 for this paper.
Anyone interested in participating can message my ID.


11/1/2024, 6:29:22 AM
Aya Expanse

Aya Expanse 8B with Transformers:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/aya-expanse-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format the message with the chat template
messages = [{"role": "user", "content": " %prompt% "}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>%prompt%<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)

10/31/2024, 10:51:20 AM
Title:
Advanced Classification of Drug-Drug Interactions for Assessing Adverse Effect Risks of Fluvoxamine and Curcumin Using Deep Learning in COVID-19
———————————————————————
Keywords:
Drug–Drug Interactions; Deep Neural Network; Fluvoxamine; Curcumin; Machine Learning.
———————————————————————
Journal of Infrastructure, Policy and Development


The first author position is filled.
The second, third, and fourth positions are open.

The paper is in its final revision.



10/30/2024, 1:11:04 PM
Hello, we need a co-author for the paper below.

IF: 1.2
Paper link:

The full set of changes for the final version will be applied within the next week. Anyone interested in collaborating can message my ID.



10/30/2024, 7:00:10 AM
Zamba2-Instruct

The family includes 2 models:

# Clone repo
git clone https://github.com/Zyphra/transformers_zamba2.git
cd transformers_zamba2

# Install the repository & accelerate:
pip install -e .
pip install accelerate

# Inference:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-2.7B-instruct")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-2.7B-instruct", device_map="cuda", torch_dtype=torch.bfloat16)

user_turn_1 = "user_prompt1."
assistant_turn_1 = "assistant_prompt."
user_turn_2 = "user_prompt2."
sample = [{'role': 'user', 'content': user_turn_1}, {'role': 'assistant', 'content': assistant_turn_1}, {'role': 'user', 'content': user_turn_2}]
chat_sample = tokenizer.apply_chat_template(sample, tokenize=False)

input_ids = tokenizer(chat_sample, return_tensors='pt', add_special_tokens=False).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=150, return_dict_in_generate=False, output_scores=False, use_cache=True, num_beams=1, do_sample=False)
print((tokenizer.decode(outputs[0])))

10/29/2024, 7:16:51 PM
Title: BERTCaps: BERT Capsule for Persian Multi-Domain Sentiment Analysis

Abstract:
Sentiment classification is widely known as a domain-dependent problem. Learning an accurate domain-specific sentiment classifier requires a large number of labeled samples, which are expensive and time-consuming to annotate. Multi-domain sentiment analysis based on multi-task learning can leverage the labeled samples available in each individual domain, alleviating the need for large amounts of labeled data across all domains. In this article we propose BERTCaps, a multi-domain classifier in which BERT is used for instance representation and a capsule network is used for instance learning. On the evaluation dataset, the model achieved an accuracy of 0.9712 in polarity classification and 0.8509 in domain classification.

Journal:
IF: 2.3

We need author positions 2 to 4 for this paper.
Anyone interested in participating can message my ID.


10/23/2024, 5:13:15 PM
One of the best topics in text classification is multi-domain sentiment analysis. For this purpose we designed a model titled Title: TRCAPS: The Transformer-based Capsule Approach for Persian Multi-Domain Sentiment Analysis, which achieved much better results than IndCaps…
10/21/2024, 11:59:29 AM
One of the best topics in text classification is multi-domain sentiment analysis. For this purpose we designed a model titled
Title: TRCAPS: The Transformer-based Capsule Approach for Persian Multi-Domain Sentiment Analysis
which has achieved much better results than IndCaps.
Friends who need a paper in the NLP area can participate in this paper until the end of this week.

The target journal is Array (Elsevier).

Participants in this paper will also be required to carry out some tasks.



10/21/2024, 2:31:42 AM
⚡️ DesignEdit: Multi-Layered Latent Decomposition and Fusion for Unified & Accurate Image Editing

Microsoft presents DesignEdit!

Github:
Paper:
Project:

4/25/2024, 1:10:10 AM
⚡️ DBRX, a groundbreaking open-source Large Language Model (LLM) with a staggering 132 billion parameters.


Github:
HF:



4/17/2024, 8:25:01 PM