Niklas Bühler
Escaping local optima on a random walk through life.



🎓 Academic Publications

🩻 Pilot Study for Large-scale Radiograph Pre-training

[2024-04-17 Wed]

irma.png

In recent years, medical datasets have expanded significantly, offering great potential for the development of machine learning applications in the medical field. However, manual labeling of such data is costly and poses a significant bottleneck to their utilization.

To address this issue, self-supervised learning (SSL) exploits the data itself to learn embeddings that can be quickly adapted to downstream tasks as needed.

In this work, we demonstrate the suitability of self-supervised learning techniques, specifically masked autoencoders (MAE), for generating such embeddings from a large clinical dataset comprising 12,000 radiograph images from various anatomical regions.

By pre-training an MAE model to produce these high-quality embeddings, the need for labeled data in downstream tasks is substantially reduced. This is evidenced by a linear classifier trained on representations from this MAE model achieving 84.58% top-1 accuracy on body part classification when using only 1% of the data, a 7% relative improvement over fully supervised training.

This pilot study thus establishes the foundation for applying the MAE strategy to our own large-scale real-world radiograph dataset, comprising 700,000 radiograph images, as well as for evaluating it on more complex downstream tasks in future work.
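For illustration, here is a minimal sketch of such a linear-probing evaluation, assuming a frozen pre-trained MAE encoder. The encoder stand-in, tensor shapes, and synthetic data below are placeholders, not the actual pipeline from the study.

#+begin_src python
import torch
import torch.nn as nn

# Placeholder sizes and stand-in modules, not the actual pipeline from the study.
embed_dim, num_classes, batch = 768, 12, 8
mae_encoder = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, embed_dim))  # stand-in for the frozen, pre-trained MAE encoder
linear_probe = nn.Linear(embed_dim, num_classes)                            # the only trainable part
optimizer = torch.optim.AdamW(linear_probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

mae_encoder.eval()
images = torch.randn(batch, 1, 224, 224)           # synthetic radiographs
labels = torch.randint(0, num_classes, (batch,))   # synthetic body part labels

with torch.no_grad():
    embeddings = mae_encoder(images)               # frozen features, shape (batch, embed_dim)
logits = linear_probe(embeddings)
loss = criterion(logits, labels)
loss.backward()                                    # gradients reach only the linear probe
optimizer.step()
#+end_src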


🧬 Functional Gene Embeddings

[2024-02-08 Thu]

functional-gene-embedding.png

Functional gene embeddings, numerical vectors capturing gene functions, can provide useful representations of genes for downstream analyses. In this project, we extracted functional gene embeddings from transcription data and other genome-wide measurements using principal components, autoencoders, and a Variational Deep Tensor Factorization model. We used these embeddings, as well as embeddings from other publications, as gene features in the prediction of genome-wide association study summary statistics for a diverse set of traits and compared their performance.
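As a small illustration of the simplest of these variants, the sketch below derives principal-component gene embeddings from a genes-by-samples expression matrix. The matrix is synthetic and all dimensions are placeholders, not the data used in the project.

#+begin_src python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic expression matrix: rows are genes, columns are samples/measurements.
rng = np.random.default_rng(0)
expression = rng.normal(size=(1000, 200))        # placeholder: 1,000 genes x 200 samples

# The principal components of each gene's expression profile serve as its embedding.
pca = PCA(n_components=32)
gene_embeddings = pca.fit_transform(expression)  # shape: (1000, 32)

# These vectors can then be used as per-gene features, e.g. in a model
# predicting GWAS summary statistics.
print(gene_embeddings.shape)
#+end_src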


🧠 Connectome Informed Attention

[2023-02-16 Thu]

connectome-informed-attention.png

We predict tau spreading behavior in Alzheimer's patients based on a connectivity map and tau PET scans of the brain.
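The blurb above does not spell out the architecture; as one plausible reading of connectome-informed attention, the sketch below adds a structural connectivity matrix as a bias on the attention scores between brain regions. All names, shapes, and the particular form of the bias are illustrative assumptions, not the project's actual model.

#+begin_src python
import torch
import torch.nn.functional as F

def connectome_biased_attention(region_feats, connectivity, d_k=32):
    """Single attention head over brain regions, biased by structural connectivity.

    region_feats: (n_regions, d_model) per-region features, e.g. derived from tau PET.
    connectivity: (n_regions, n_regions) non-negative structural connectivity map.
    """
    d_model = region_feats.shape[1]
    # Illustrative random projections; in a trained model these would be nn.Linear layers.
    w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
    q, k, v = region_feats @ w_q, region_feats @ w_k, region_feats @ w_v

    scores = (q @ k.T) / d_k ** 0.5
    scores = scores + torch.log(connectivity + 1e-6)  # favor attention along strong connections
    return F.softmax(scores, dim=-1) @ v

# Synthetic example: 90 atlas regions with 64 features each.
feats = torch.randn(90, 64)
conn = torch.rand(90, 90)                                # placeholder connectivity map
print(connectome_biased_attention(feats, conn).shape)    # torch.Size([90, 32])
#+end_src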


🧊 Bayesian Deep Learning – A Stochastic Dynamics Perspective

[2023-01-29 Sun]

bayesian-deep-learning.png

This report gives an overview of Bayesian deep learning from a stochastic dynamics perspective. It first introduces Bayesian deep learning together with two important methods for training Bayesian neural networks, and then builds on this foundation by presenting various approaches and variations inspired by stochastic dynamics.
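A canonical method at this intersection is stochastic gradient Langevin dynamics (SGLD), which turns gradient descent into an approximate posterior sampler by injecting Gaussian noise whose variance matches the step size. The toy sketch below illustrates the update rule; it is not taken from the report.

#+begin_src python
import torch

def sgld_step(theta, neg_log_post, lr=1e-2):
    """One SGLD update: theta <- theta - (lr / 2) * grad U(theta) + N(0, lr),
    where U is the (mini-batch estimate of the) negative log posterior."""
    loss = neg_log_post(theta)
    grad, = torch.autograd.grad(loss, theta)
    with torch.no_grad():
        theta -= 0.5 * lr * grad                      # drift along the log-posterior gradient
        theta += lr ** 0.5 * torch.randn_like(theta)  # injected noise turns the iterates into samples
    return theta

# Toy example: draw (correlated) samples from a 2D standard Gaussian posterior.
theta = torch.zeros(2, requires_grad=True)
neg_log_post = lambda t: 0.5 * (t ** 2).sum()         # -log N(0, I) up to a constant
samples = [sgld_step(theta, neg_log_post).detach().clone() for _ in range(2000)]
print(torch.stack(samples).std(dim=0))                # roughly 1 per dimension once the chain has mixed
#+end_src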


🧬 Interpretable Mechanistic Models for Predicting Tissue-specific RBP Expression

[2022-07-25 Mon]

interpretable-mechanistic-models-notes.png

In this report, we predict the expression patterns of different RNA-binding proteins (RBPs) across tissue types. As our results are largely supported by the literature and by experimental data, they could lead to the discovery of previously unknown RBP expression patterns.


🌆 Case Study – Life Expectancy in Barcelona

[2022-01-23 Sun]

A short case study on what factors might influence life expectancy in the city of Barcelona.


🧮 Formalism 101

[2022-01-04 Tue]

formalism.png


🗣️ Crosslingual, Language-independent Phoneme Alignment

[2021-10-01 Fri]

bachelor-thesis.png

The goal of this thesis is to apply cross-lingual, multilingual techniques to the task of phoneme alignment, i.e. temporally aligning a phonetic transcript to its corresponding audio recording. Three different neural network architectures are trained on a multilingual dataset and used as sources of emission probabilities in hybrid HMM/ANN systems. These HMM/ANN systems enable the computation of phoneme alignments via the Viterbi algorithm. By iterating this process, multilingual acoustic models are bootstrapped, and the resulting systems are used to cross-lingually align data from a previously unseen target language.
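To illustrate the alignment step, the sketch below runs Viterbi forced alignment on synthetic inputs: given per-frame log emission probabilities (as produced by such an ANN) and a fixed phoneme sequence, dynamic programming recovers the most likely frame-to-phoneme assignment. This is an illustrative toy, not the thesis code.

#+begin_src python
import numpy as np

def viterbi_align(log_probs, phoneme_seq):
    """Force-align a phoneme sequence to audio frames.

    log_probs: (n_frames, n_phones) per-frame log emission probabilities.
    phoneme_seq: phone indices in spoken order.
    Returns, for each frame, the index into phoneme_seq it is assigned to.
    """
    T, S = log_probs.shape[0], len(phoneme_seq)
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0, 0] = log_probs[0, phoneme_seq[0]]                   # alignment must start in the first phoneme
    for t in range(1, T):
        for s in range(S):
            stay = dp[t - 1, s]                               # remain in the same phoneme
            advance = dp[t - 1, s - 1] if s > 0 else -np.inf  # move on to the next phoneme
            back[t, s] = s if stay >= advance else s - 1
            dp[t, s] = max(stay, advance) + log_probs[t, phoneme_seq[s]]
    path, s = [S - 1], S - 1                                  # backtrace from the final frame/phoneme
    for t in range(T - 1, 0, -1):
        s = back[t, s]
        path.append(s)
    return path[::-1]

# Synthetic example: 10 frames, 5 phone classes, utterance consisting of phones [0, 1, 2].
frames = np.log(np.random.default_rng(0).dirichlet(np.ones(5), size=10))
print(viterbi_align(frames, [0, 1, 2]))
#+end_src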


🤖 Can Computers Think?

[2021-08-04 Wed]

can-computers-think.png

Human intelligence has been regarded as the defining property of the human race for thousands of years. However, with the recent rise of machine learning, amplified by breakthrough results in deep learning, this view is being called into question, as one question becomes more relevant and polarizing than ever before: Will computers be able to think in a way that is equivalent – or even superior – to ours?


🔏 Security Review

[2020-07-30 Thu]

security.png

A review of the most important definitions and results of the Security lecture given at KIT.


🍛 Cooking With Curry: Lambda Poetry

[2020-08-03 Mon]

y-combinator.png

The lambda calculus is a formal system used to express computations. Like the Turing machine, it is a universal model of computation; however, it is much simpler and more elegant than those bulky machines. The following pages are filled with fundamental data structures, and the most important functions operating on them, in the untyped lambda calculus. The data structures presented differ from ordinary, imperative data structures in that they are purely functional: they don't describe where and how the data is stored, but rather how functions are applied to that data. Variable names are often chosen as a hint at the values they hold, but are not elaborated on; their exact purpose and meaning is left open for exploration.
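To give a flavor of these encodings, here is a small sketch of Church booleans and pairs written as Python lambdas; the encodings themselves are standard, though the report works directly in untyped lambda calculus notation.

#+begin_src python
# Church booleans: a boolean is a function that selects one of two arguments.
TRUE  = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
NOT   = lambda p: p(FALSE)(TRUE)
AND   = lambda p: lambda q: p(q)(FALSE)

# Church pairs: a pair stores its components by closing over them;
# reading a component means applying the pair to a selector.
PAIR   = lambda x: lambda y: lambda f: f(x)(y)
FIRST  = lambda p: p(TRUE)
SECOND = lambda p: p(FALSE)

p = PAIR(1)(2)
print(FIRST(p), SECOND(p))          # 1 2
print(AND(TRUE)(FALSE) is FALSE)    # True
#+end_src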


🌐 Graph Theory Review

[2020-02-20 Thu]

graph-theory.png

A review of the most important definitions and results of the Graph Theory lecture given at KIT.


♾️ Cardinalities, Cardinal Numbers, and the Continuum Hypothesis

[2019-12-17 Tue]

infinity.png

A summary of set-theoretic definitions and theorems for my presentation (which was given on the blackboard).
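For context, the continuum hypothesis named in the title asserts that there is no set whose cardinality lies strictly between that of the natural numbers and that of the real numbers; in cardinal arithmetic (assuming the axiom of choice) it is usually written as

\[
2^{\aleph_0} = \aleph_1.
\]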


🧫 Computation and Pattern Formation by Swarm Networks with Brownian Motion

[2019-07-22 Mon]

cells.png

A written report and presentation condensing multiple papers on the concept of Swarm Networks introduced by Isokawa and Peper.

Created: 2024-04-17 Wed 15:06