Seminars 2026
Seminar of the Research and Study Laboratory of Artificial Intelligence for Computational Biology
Date: 11.02.2026
Topic: Graph Neural Networks
Speaker: Губич Александр Сергеевич, Research Assistant at the Research and Study Laboratory of Artificial Intelligence for Computational Biology
Abstract: A concise overview of Graph Neural Networks, focusing on the message-passing paradigm, key challenges in architecture design, and state-of-the-art research directions in the field.
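The message-passing paradigm mentioned in the abstract can be sketched in a few lines of NumPy. The mean aggregation, weight shapes, and toy graph below are illustrative assumptions, not any specific published architecture:

```python
import numpy as np

def message_passing_layer(A, H, W):
    """One round of message passing on a graph.

    A : (n, n) adjacency matrix (1 where an edge exists)
    H : (n, d) node feature matrix
    W : (d, d_out) weight matrix (learnable in a real model)

    Each node aggregates (mean) its neighbors' features; the result
    is linearly transformed and passed through a ReLU nonlinearity.
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # node degrees
    messages = (A_hat @ H) / deg             # mean aggregation over neighbors
    return np.maximum(0.0, messages @ W)     # transform + ReLU

# Toy graph: three nodes in a path 0-1-2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)        # one-hot node features
W = np.ones((3, 2))  # hypothetical weights
H_next = message_passing_layer(A, H, W)
print(H_next.shape)  # (3, 2)
```

Stacking several such layers lets information propagate over multi-hop neighborhoods, which is the basic mechanism the talk's "architecture design" challenges (e.g., oversmoothing) revolve around.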
Date: 28.01.2026
Topic: Score calibration
Speaker: Зорина Евдокия Юрьевна, Research Assistant at the Research and Study Laboratory of Artificial Intelligence for Computational Biology
Abstract: Accurate identification of peptide-spectrum matches in shotgun proteomics critically depends on proper calibration of scoring functions. Raw identification scores are influenced by spectrum-specific characteristics, which makes them non-comparable across spectra and undermines reliable false discovery rate (FDR) estimation. Spectrum-specific calibration addresses this problem by transforming raw scores into statistically interpretable p-values. Empirical calibration methods estimate null score distributions through Monte Carlo sampling of decoy peptides, while exact approaches compute these distributions analytically. Together, these approaches demonstrate that spectrum-specific score calibration is essential for improving identification validity and statistical reliability in shotgun proteomics.
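The empirical (Monte Carlo) calibration described above can be sketched as follows, assuming decoy scores for a single spectrum are already available; the score values and null distribution here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_p_value(observed_score, decoy_scores):
    """Spectrum-specific empirical p-value.

    The null score distribution for THIS spectrum is estimated by
    scoring it against randomly drawn decoy peptides; the p-value is
    the fraction of decoy scores >= the observed score, with a +1
    correction so the estimate is never exactly zero.
    """
    decoy_scores = np.asarray(decoy_scores)
    exceed = np.sum(decoy_scores >= observed_score)
    return (exceed + 1) / (len(decoy_scores) + 1)

# Hypothetical spectrum-specific null: 10,000 decoy-peptide scores
decoy_scores = rng.normal(loc=10.0, scale=2.0, size=10_000)
p = empirical_p_value(observed_score=16.0, decoy_scores=decoy_scores)
print(p)  # small p-value: the observed match is unlikely under the null
```

Because each spectrum gets its own null distribution, the resulting p-values are comparable across spectra, which is what makes downstream FDR estimation reliable.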
Date: 14.01.2026
Topic: QLoRA: Finetuning 65B Language Models on a Single GPU
Speaker: Асад Мухаммад, Research Assistant at the Research and Study Laboratory of Artificial Intelligence for Computational Biology
Abstract: QLoRA is a memory-efficient finetuning method that enables training 65-billion-parameter language models on a single 48GB GPU—without compromising performance. It achieves this by freezing a pretrained model quantized to 4-bit NormalFloat (NF4) precision and injecting small, trainable Low-Rank Adapters (LoRA) into attention layers. Three key innovations make this possible: (1) NF4 quantization optimized for neural weight distributions, (2) double quantization to compress metadata overhead, and (3) paged optimizers that prevent memory spikes during training. Evaluated across 19 NLP tasks and multiple model families (LLaMA, OPT), QLoRA matches full 16-bit finetuning performance—including 99.3% of ChatGPT's score on chat benchmarks—while reducing memory requirements by 16×. This democratizes LLM customization, enabling state-of-the-art finetuning for under $1,000 on accessible hardware.
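The core idea of freezing a pretrained weight and training only a low-rank update can be sketched in NumPy. The dimensions, rank, and scaling factor below are illustrative assumptions, not the paper's exact configuration, and quantization of the frozen weight is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

d, r = 512, 8  # hidden size and adapter rank (r << d)

W_frozen = rng.normal(size=(d, d))       # pretrained weight, kept frozen
A = rng.normal(scale=0.01, size=(d, r))  # trainable down-projection
B = np.zeros((r, d))                     # trainable up-projection, init at 0

def lora_forward(x, alpha=16.0):
    """Forward pass with a Low-Rank Adapter.

    The frozen weight is untouched; only A and B (d*r + r*d parameters
    instead of d*d) would receive gradients. Because B starts at zero,
    the adapted model initially matches the pretrained one exactly.
    """
    return x @ W_frozen + (alpha / r) * (x @ A @ B)

x = rng.normal(size=(1, d))
# At initialization the adapter contributes nothing:
assert np.allclose(lora_forward(x), x @ W_frozen)

full_params = d * d
lora_params = d * r + r * d
print(lora_params / full_params)  # 2r/d = 0.03125, i.e. ~32x fewer trainable params
```

In QLoRA the frozen `W` is additionally stored in 4-bit NF4 and dequantized on the fly during the forward pass, which is where the 16× memory reduction comes from.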