Eye movement patterns
What do scanpaths tell us? How do they differ between subjects, tasks, and levels of experience? And how can we do machine learning on eye-tracking data?
Everyone can do eye-tracking in the lab. But what about real-world applications? How can we strengthen eye-tracking technology so that it works in naturalistic scenarios, with ambient light, eyeglasses, and smudges and dirt on the lenses?
How good is your eye-tracker, not only in terms of tracking rate but also in accuracy and precision? How can you tell? And what can you do if the scenario does not allow for a more accurate recording?
Data analysis beyond the heatmap. How do we make sense of that huge pile of data? Is it actually useful at all?
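A back-of-the-envelope way to answer the accuracy and precision question: record gaze while the subject fixates a known validation target, then take the mean angular offset from the target (accuracy) and the sample-to-sample RMS (precision). A minimal Python sketch; the function names and sample values are made up for illustration, not from any tracker SDK:

```python
import math

def accuracy_deg(gaze, target):
    """Mean offset between gaze samples and a known validation target,
    both given in degrees of visual angle."""
    return sum(math.dist(g, target) for g in gaze) / len(gaze)

def precision_rms_deg(gaze):
    """RMS of successive sample-to-sample distances: spatial noise."""
    diffs = [math.dist(a, b) for a, b in zip(gaze, gaze[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical validation recording, target shown at (0, 0) degrees.
samples = [(0.5, 0.1), (0.6, 0.0), (0.4, 0.2), (0.5, 0.1)]
acc = accuracy_deg(samples, (0.0, 0.0))   # systematic offset, ~0.52 deg
prec = precision_rms_deg(samples)         # noise, 0.2 deg
```

The two numbers tell very different stories: a tracker can be precise but systematically off, or accurate on average but noisy.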
Thomas C. Kübler
2017 – Today Postdoc at Eberhard-Karls-Universität Tübingen, Perception Engineering group. Innovation Grant Life-Sciences.
2012 – 2017 PhD student at Eberhard-Karls-Universität Tübingen, Perception Engineering group. Title of my thesis: "Algorithms for the Comparison of Visual Scan Patterns".
2012 – 2016 PhD student at Aalen University
"Vision Research" group in the Ophthalmic Optics study course and at the "Aalen Mobility Perception and Exploration Lab".
2010 – 2012 Master of Science (M.S.), Bioinformatics at Eberhard-Karls-Universität Tübingen
Title of my thesis: "A framework for the online recognition of assistance needs for drivers with visual field defects".
2007 – 2010 Bachelor of Science (B.Sc.), Bioinformatics at Eberhard-Karls-Universität Tübingen
Title of my thesis: "Entwurf und Implementierung eines Algorithmus zur Skotom-Klassifikation für automatische statische Perimetrie". (Design and implementation of an algorithm for visual field defect classification for automated, static perimetry)
Comparison of eye movement patterns. Aggregating gaze data to a cognitive and semantically meaningful level.
Algorithms and methods, devices and setups. How to do it and how not to.
Can I measure that? What's wrong with my data? Why doesn't this work as expected?
So, what is in my data? Visualization beyond the heatmap with focus on the narrative behind eye movement trajectories.
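Once fixations are aggregated to areas of interest, two scanpaths become label sequences and can be compared with, for example, a plain string edit distance. A minimal sketch of that idea; the AOI labels are hypothetical:

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions
    needed to turn AOI sequence `a` into `b`."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def scanpath_similarity(a, b):
    """Normalize to [0, 1]; 1 means identical AOI sequences."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

# Two hypothetical viewers, fixations mapped to AOIs A-D.
s = scanpath_similarity("ABCABD", "ABDABD")  # one substitution -> 5/6
```

This is only the simplest member of the family; weighted alignments additionally account for fixation durations and AOI similarity.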
Full list over here.
One of the code examples that Colab provides handles recording of an image via a webcam. The code writes a webcam frame to an image file and displays it. However, it does not directly handle live streaming of the webcam feed. I wanted to perform deep learning-based eye-tracking within Colab using the webcam. That would Read more about Google Colab Webcam streaming and processing[…]
I’ve been searching my files for a while to find the dataset of a replication of Yarbus’ famous experiment. Seems like the webpage of Ali Borji ( http://ilab.usc.edu/borji/Resources.html ) mentioned in the manuscript does not link to the data anymore. However, it’s still available via the direct download link at http://ilab.usc.edu/borji/Yarbus.zip
Ball B-splines Ball B-splines are used to model tube-like 3D surfaces or volumes. You can imagine them as a rubber hull over a sequence of marbles of variable diameter. The concept is almost identical to that of normal B-splines: We fit a polynomial to small segments of the complete curve. By combining multiple piecewise polynomials we Read more about Ball B-splines[…]
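To make the marble picture concrete, here is a small Python sketch (not code from the post) of one uniform cubic segment: the same B-spline blending weights are applied to the ball centers and, identically, to the radii:

```python
def bspline_blend(t, p0, p1, p2, p3):
    """Uniform cubic B-spline blending at t in [0, 1]; works
    componentwise on 3D centers (tuples) and directly on scalar radii."""
    w = ((1 - t) ** 3 / 6.0,
         (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
         (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
         t ** 3 / 6.0)
    ctrl = (p0, p1, p2, p3)
    if isinstance(p0, tuple):
        return tuple(sum(wi * p[k] for wi, p in zip(w, ctrl))
                     for k in range(len(p0)))
    return sum(wi * p for wi, p in zip(w, ctrl))

# Four "marbles" (control balls) of one segment: centers and radii.
centers = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 1.0, 0.0), (3.0, 1.0, 1.0)]
radii = [0.5, 0.4, 0.3, 0.2]

c = bspline_blend(0.5, *centers)  # point on the tube's centerline
r = bspline_blend(0.5, *radii)    # tube radius at that point, 0.35
```

Sweeping t over all segments yields the centerline plus a smoothly varying radius, i.e. the "rubber hull" over the marbles.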
This post will be about the first step in both the Guiding Vector Tree and Space Colonization algorithms: Poisson sampling. Both procedurally generate a tree structure by joining either randomly sampled points together or by summing over a randomly sampled set of attraction points. But the straightforward approach while (nGeneratedSamples < nRequestedSamples) new Sample(Random.value, Random.value, Random.value) Read more about Procedural Tree – Poisson Disk Sampling in 3D (C#)[…]
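The trouble with the straightforward loop is that purely uniform samples clump together. The simplest fix is dart throwing: accept a candidate only if it keeps a minimum distance to every previously accepted sample. A rough sketch in Python (the post itself works in C#; parameter names are mine):

```python
import math
import random

def poisson_disk_3d(n, min_dist, max_tries=20000, seed=42):
    """Naive dart throwing in the unit cube: accept a candidate only if
    it keeps at least `min_dist` to every already accepted sample."""
    rng = random.Random(seed)
    samples = []
    tries = 0
    while len(samples) < n and tries < max_tries:
        tries += 1
        p = (rng.random(), rng.random(), rng.random())
        if all(math.dist(p, q) >= min_dist for q in samples):
            samples.append(p)
    return samples

pts = poisson_disk_3d(50, 0.15)
```

This costs a distance check against every accepted sample per dart; for serious point counts, Bridson's algorithm with a background grid is the usual choice.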
My doctoral thesis on Scanpath Comparison Algorithms is published, you can grab a free copy at http://hdl.handle.net/10900/74458 or the compressed e-book version for online reading (with slightly less beautiful figures) at my university webpage.
Recently I stumbled across the Procedural World blog (https://procworld.blogspot.de/). It has a nice post on how to generate trees algorithmically. They use the so-called Space Colonization algorithm. Basically, a large number of attraction points is chosen randomly within the volume of the soon-to-be tree crown, and each of these points exerts an influence on the Read more about Procedurally generated trees – quick overview[…]
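A rough Python sketch (parameter names and values mine, not from the blog) of a single growth iteration of that algorithm: each attraction point pulls its nearest branch node, pulled nodes sprout a child toward the mean pull direction, and attractors that have been reached are consumed:

```python
import math

def grow_step(nodes, attractors, influence=1.0, kill=0.2, step=0.1):
    """One iteration: every attraction point pulls its nearest node
    (if within `influence`); each pulled node sprouts a child toward the
    mean pull direction; attractors within `kill` of a node are consumed."""
    pulls = {}
    for a in attractors:
        i, d = min(((i, math.dist(a, n)) for i, n in enumerate(nodes)),
                   key=lambda x: x[1])
        if d <= influence:
            pulls.setdefault(i, []).append(a)
    children = []
    for i, ats in pulls.items():
        n = nodes[i]
        mean = [sum(a[k] - n[k] for a in ats) / len(ats) for k in range(3)]
        norm = math.sqrt(sum(c * c for c in mean)) or 1.0
        children.append(tuple(n[k] + step * mean[k] / norm for k in range(3)))
    nodes = nodes + children
    attractors = [a for a in attractors
                  if min(math.dist(a, n) for n in nodes) > kill]
    return nodes, attractors

# Trunk seed plus two attractors in the future crown volume.
nodes, attractors = grow_step([(0.0, 0.0, 0.0)],
                              [(0.0, 0.0, 0.5), (0.1, 0.0, 0.6)])
```

Iterating until all attractors are consumed grows the branch skeleton; recording each child's parent index gives the tree topology for meshing.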
Ever wondered how your eyes perform while playing a videogame? Turns out they might be quite important for winning the game. Actually, so important that one can even tell whether you are a winner based solely on the movement of your eyes!
Saliency is a measure of how strongly elements stand out from their surroundings. Highly salient objects are likely to attract an observer’s attention. These regions are usually viewed during the first fixations, within very few seconds or even milliseconds. Are you interested in what people will look at in your image/website/advertisement?
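As a toy illustration of the idea (a deliberately crude stand-in for real saliency models), center-surround contrast can be computed as the absolute difference between each pixel and its local mean; a single bright pixel on a dark background then pops out:

```python
def box_blur(img, k=1):
    """Local mean over a (2k+1)x(2k+1) window, clamped at the border."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [img[yy][xx]
                   for yy in range(max(0, y - k), min(h, y + k + 1))
                   for xx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(win) / len(win)
    return out

def saliency(img):
    """Center-surround contrast: |pixel - local mean|."""
    blur = box_blur(img)
    return [[abs(p - b) for p, b in zip(ri, rb)]
            for ri, rb in zip(img, blur)]

# One bright pixel on a dark background: a textbook pop-out stimulus.
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0
sal = saliency(img)
```

Real models (e.g. the Itti-Koch family) do the same center-surround comparison across multiple feature channels and scales, then combine the maps.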