Professional Data Science portrait with Jan Allemann: Chief Revenue Officer (CRO)

Our former HSLU Applied Information and Data Science student Jan Allemann shares some insights into his everyday working and study life. Moving fast ahead in his career, he has already been promoted to Chief Revenue Officer (CRO) at Cowa Thermal Solutions AG. Jan is now responsible for all steps in industrializing the product as well as for the sales strategy. We also learned what he enjoyed most during his studies.

Jan Allemann
Chief Revenue Officer (CRO) @ Cowa Thermal Solutions AG


First of all, tell us something about yourself: Which hashtags describe you the best?
#interdisciplinary #staycurious #responsibility #sustainability #teamsport

Tell us a bit more about them.
I have always been more interested in discovering the manifold connections among subject areas than in drilling down to the last detail of a single topic. Although I’ve worked intensively on certain topics over the years, this personal trait is reflected in my curriculum vitae, which documents my work in mechanical engineering, data science and business. Furthermore, I’m also aware of how much specialized experts with a love for detail are in demand, which brings me to my last hashtag: #teamsport. No other place has taught me more about how we as a team can challenge each other and learn.

 

Now let’s talk about your professional life: What do you do at Cowa Thermal Solutions AG?
We are a start-up from Lucerne and will soon be launching an innovative product for thermal energy storage. Since February 2021, I’ve been responsible for all steps in industrializing the product and now also for the sales strategy.

What did you do previously and why did you join Cowa Thermal Solutions AG?
My background is in mechanical engineering. I first worked as a design engineer at Bosch and then joined Zühlke Engineering as a systems engineer after I got my degree. Both were very exciting companies and professions, but as I already mentioned, I’ve always been interested in other areas, such as computer science, general business fields, and renewable energies. That’s how I ended up in the MSc program in data science and the job as a marketing analyst for a software company in Boston. I got my current job through my network at Lucerne University of Applied Sciences and Arts. Cowa was looking exactly for someone with my rather unusual profile, and I didn’t hesitate for a second, as the product, the company, and the challenging position appealed to me.

Tell us about the most exciting thing in your job.
We are a start-up, which is why many processes are still undefined and software decisions are still pending. On the other hand, it means we can build our company by using cutting-edge knowledge and best practices. We are not stuck with legacy systems or outdated processes – a very privileged and exciting place to be in. That’s why – especially in my field around the marketing strategy and sales channels – I strive to be as data-driven and digital as possible.

Which data science skills are especially in demand in your job?
In our production, the key is knowing right from the start what questions we want to put to the data and what it is we want to learn. This way, we can ensure that we collect the right data in the first place and set up appropriate sensors and measurement procedures. On the market side, I’m currently working on our target accounts and using analytics tools to learn which sales arguments generate the most interest.
In addition, the MSc in Applied Information and Data Science also offers a significant number of modules on business model development, customer experience or management. This has given me some very valuable things that I can apply directly every day.

Do you think of yourself more as a techie or as an analyst? Or as a creative genius, management superhero or generalist wiz?
Whew! I guess I would say that I’m a generalist, based on my answers above 😊. But there’s definitely a techie somewhere deep down inside, as I also like to keep myself busy with the latest technical achievements in my private life. However, my broad background and generalist traits tend to dominate everyday professional life. 

What do you remember the most when you look back at your time in the MSc in Applied Information and Data Science program?
A culture shock in the positive sense when I started my semester abroad in Boston. In Switzerland, I had to explain again and again what data science actually is and what exactly it is that I was studying. In Boston, however, I hardly met anyone who didn’t work in the field of data or computer science or who had taken at least a basic course in Python just for the fun of it. That impressed me a lot, and I think that we in Switzerland still have some catching up to do in this respect.

 

What are the biggest challenges in your job at the moment?
Every day I’m confronted with tasks that are completely new to me. Although it’s possible to build a company on a greenfield site, doing so also involves a lot of challenges and difficulties when it comes to deciding on the right principles to pursue. So it’s crucial for me to know when to decide for myself and when to ask for advice from more experienced people. I find this to be very exciting and I’m glad that I’ve acquired the necessary tools to master the challenges while I was studying.

What advice would you have for others starting in the same job?
The following – especially for those with little experience in coding: Stick with it! Even if it may seem painful at first, you’ll notice fascinating things happening eventually.

And finally: What new hashtag are you aiming for in 2022?
#digitalmarketing
We at Cowa are planning to significantly increase our market activities in the coming months – hopefully by using data the way I learned during the program.


Data is the resource of the 21st century!
Register and join us for a free online Information-Event:

Monday, 2 June 2025 (Online, German)
Monday, 11 August 2025 (Online, English)

Many thanks to Jan Allemann for this interesting interview and the insights into your job! 

Data Science: Hackdays Challenge – Run Against Your Predicted Time

In Switzerland, over 200'000 runners participate in around 550 running events every year. From professional to hobby runners, everyone is running against themselves and, above all, against the clock. In the future, a new component should be added to this competition: betting. Every runner should be able to bet against their own time. To make this possible, each runner’s finishing time has to be predicted as accurately as possible.

Table of Contents – Data Science Challenge:
Introduction | Data | Development | Conclusion

Authors: Pascal Humbel, Simon Stäubli, Mark Arnosti and Andreas Kläusli
Challenge owner: Datasport

Introduction

The Sport Hackdays Lucerne took place on November 27 and 28, 2021. This Applied Data Science event was organized by the 3rd and 4th semester students of the Master’s programme “Applied Information and Data Science” of the Lucerne University of Applied Sciences and Arts and was a legacy of the Winter Universiade 2021.

With the support of ThinkSport, Swiss University Sports and data innovation alliance, 11 different challenges were addressed. Our team chose challenge 1 from Datasport.

Datasport has been in the timekeeping business since 1983. Through innovative ideas, speed and professionalism, it has grown to become the most important IT service provider for sporting events in popular and mass sport and is now the leading provider in Switzerland and abroad. Datasport provides timekeeping for running, cycling, multisport, winter sports, walking and race series, among other areas.

The goal of challenge 1 was to develop an individual betting system for long-distance races. To realistically estimate times to beat, Datasport needs an algorithm that predicts finishing times in endurance races. The algorithm should be adaptable to different tracks and able to learn from previous races.

Data

Datasport provided data from approximately 20 different races over a 10-year period (n = 229’136). The dataset included runner-related variables such as runner ID, country, gender, year and nationality, as well as variables related to the run itself, such as type of run, altitude, distance, run ID and final time. Furthermore, each run was categorized into one of the following categories to facilitate comparisons between different runs: flat (“gerade”), slightly ascending (“leicht steigend”) or mountain run (“Berglauf”).

 

Development

As a first step, we needed to understand the data. To do this, we considered which variables might be relevant and which might not. The next step was to clean the dataset: we converted the run times into a common format and deleted times from aborted runs. In addition, we added variables such as the season, month and time of day of the run. With the cleaned dataset, we were able to consider which modeling approach was most appropriate.
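
These cleaning steps can be sketched in a few lines of Python; the column names, time formats and the "DNF" marker below are illustrative assumptions, not Datasport’s actual schema:

```python
import pandas as pd

# Hypothetical raw results; columns and values are illustrative only.
raw = pd.DataFrame({
    "runner_id": [1, 1, 2],
    "final_time": ["1:02:30", "0:58:12", "DNF"],   # mixed with aborted runs
    "date": ["2019-04-14", "2020-09-06", "2019-04-14"],
})

# Drop aborted runs, convert times to seconds, derive date-based features.
clean = raw[raw["final_time"] != "DNF"].copy()
clean["seconds"] = pd.to_timedelta(clean["final_time"]).dt.total_seconds()
clean["date"] = pd.to_datetime(clean["date"])
clean["month"] = clean["date"].dt.month
clean["season"] = clean["date"].dt.month % 12 // 3 + 1  # 1=winter .. 4=autumn
```

Converting everything to seconds up front keeps later models and the MAE metric in a single unit.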

Figure 1: Races per registered runner

We then decided on two different approaches and compared them by the Mean Absolute Error (MAE) in seconds: a general model that takes all of a runner’s data into account regardless of parameters such as race distance or race topography, and a more granular model tailored to the race distance.

Because the number of races per runner, the races themselves and the race distances varied significantly across the dataset, it was pivotal to control for the race distance and the number of races a runner had completed. For instance, the majority of runners participated in roughly 2 to 7 races over this 10-year period, which results in an unbalanced, right-skewed distribution (see figure).
Another case in point are the race distances, which range from 5 to 25 kilometers with correspondingly different race times. Thus, any model serving to predict an individual runner’s time has to control for at least one of these parameters.

After tinkering with different models ranging from simple linear regression to gradient boosting, we decided to fit a gradient boosting model for both the general approach and the model considering the race distances. Our best result with the general model was an MAE of 708.75 s (roughly 11 min 49 s), which served as a baseline. This model was trained on runners who participated in 2 or more races. However, this error margin is clearly too high for a prediction of any sort, even for longer distances up to 25 kilometers.
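
As a rough illustration of the general approach, on synthetic stand-in data rather than the Datasport set, fitting a gradient boosting model and scoring it by MAE might look like this:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for the cleaned features (distance km, age, prior races).
X = np.column_stack([
    rng.uniform(5, 25, 2000),      # race distance
    rng.integers(18, 70, 2000),    # runner age
    rng.integers(2, 30, 2000),     # number of prior races
])
y = X[:, 0] * 300 + rng.normal(0, 400, 2000)  # finishing time in seconds, noisy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, model.predict(X_te))
print(f"MAE: {mae:.0f} s")
```

The MAE reported here depends entirely on the synthetic noise and says nothing about the real figures above; the point is only the fit-then-score workflow.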

The second approach was intended to answer the following question: how many races does a runner need to have completed for an informed prediction? To this end, we created cohorts of runners with 5, 10, 15, 20 and 28 races over distances of 5.5–21.1 kilometers, trained a linear regression as well as a gradient boosting model, and compared them by the MAE metric.
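
The cohort comparison can be sketched as follows; the data is synthetic and the feature set is reduced to the race distance, so the resulting numbers are not the ones reported below:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic races: 300 runners, 5000 results over 5.5-21.1 km.
df = pd.DataFrame({
    "runner_id": rng.integers(0, 300, 5000),
    "distance_km": rng.uniform(5.5, 21.1, 5000),
})
df["time_s"] = df["distance_km"] * 310 + rng.normal(0, 300, 5000)
races_per_runner = df.groupby("runner_id")["runner_id"].transform("size")

results = {}
for min_races in [5, 10, 15, 20, 28]:
    # Keep only runners with at least min_races results.
    cohort = df[races_per_runner >= min_races]
    if len(cohort) < 50:
        continue  # cohort too small for a meaningful split
    X = cohort[["distance_km"]].to_numpy()
    y = cohort["time_s"].to_numpy()
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, mdl in [("lm", LinearRegression()),
                      ("gbm", GradientBoostingRegressor(random_state=0))]:
        mae = mean_absolute_error(y_te, mdl.fit(X_tr, y_tr).predict(X_te))
        results[(min_races, name)] = mae
```

Tabulating `results` by cohort size and model type mirrors the MAE comparison described in the observations below.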

This translated into the following observations: first and foremost, the MAE could be reduced to a range between roughly 50 and 1000 seconds with a linear regression model and between 50 and 700 seconds with a gradient boosting model. Secondly, at least 15–20 races are required before a gradient boosting model is able to predict a runner’s time to within a range of several minutes.

The achieved prediction errors raise the question of what accuracy is required to build a viable tool that really adds value in terms of challenge and motivation at running events. It must be stated that the fitted models yield a rather large prediction window. As in all sports, beating a personal best becomes increasingly difficult: the required training volume rises steeply, and the personal best may not be beatable at all once the individual performance threshold is reached. Thus, with an increasing performance level, the prediction window should decrease as well. The current models do not account for this. With an MAE of up to 1000 seconds, the current solution may therefore only serve inexperienced runners with rather slow finishing times that mostly depend on the form of the day.

Figure 2: Team in action

Conclusion

In summary, we can say that with this data it is difficult to predict the finishing time accurately. There is a lack of data that differentiates one runner from another. As an example, take two runners who are both male, 20 years old and from Switzerland.

With our model, we would predict exactly the same time for both. However, this is very unlikely, because these runners could have faced completely different conditions and may have different physical constitutions. Thus, the general model is not accurate enough. Looking at the individual model, we have to acknowledge that we do not have enough data on a single runner: we realized that we would need about 20 runs per runner to make an accurate prediction. Because performance varies greatly among runners, a tool like this relies on large and standardized datasets. One possibility is to offer the prediction tool only to runners with a certain track record or to integrate external data sources. To appeal to the masses, and also with regard to the current dataset, it seems more realistic to enrich the data with additional variables about the runners’ physical constitution.

In order to make better predictions, the profile of the runner must be improved, i.e. more data about each runner must be available. We see huge potential here, but information about the training level, number of training hours, weight, height and physical condition of each runner would need to be collected. This is crucial because variability in non-professional running seems to be quite high across all age classes and genders. The integration of personal performance data is thus a central aspect of a tool like this. To make this process as seamless as possible, data could be sourced from popular monitoring platforms such as Strava or Google Fit, where most runners already record their performance anyway. This would enable a truly personalized prediction based on the most current performance data and provide important insights about general fitness via monitored heart rate and activity levels.

 

Another aspect centers on the interpretability of a future model. Machine learning models such as gradient boosting or random forests tend to be more accurate than, for instance, a plain regression model. However, better accuracy often comes at the expense of interpretability. The latter is pivotal, for example, if Datasport wants to expand its business model towards coaching, e.g. when a half-marathon runner’s next objective is to cross a marathon finish line. In this case, specific training suggestions based on the model variables known to play an important role would come in handy.

Furthermore, it is central for Datasport to standardize its time measurement, meaning that all split times are normalized by distance. It is also advisable to integrate an elevation model of each race to make events more comparable. This would enable predictions not only of finishing times but also of individual splits.

A model that predicts a personal running time could add value for Datasport and be made available to runners within an application. We have shown what this could look like with the help of a first prototype.

Figure 3: Prototype of the solution

This application could be used by inexperienced runners to set themselves a goal, or by new runners to get an indication of what they could achieve. Finally, we would like to thank all the coaches and especially Datasport for providing the data.

Many thanks to the authors: Pascal Humbel, Simon Stäubli, Mark Arnosti and Andreas Kläusli and the challenge owner Datasport for this very interesting article and for your commitment!



Professional Data Science portrait with Michael Schmid: Research Associate

Our former HSLU Applied Information and Data Science student Michael Schmid gives us some insights into his everyday working life as a Research Associate. He also tells us what he enjoyed most during his studies. As a Research Associate at OST – University of Applied Sciences of Eastern Switzerland, he oversees research projects that advance the transfer of scientific findings to the Swiss economy.

Michael Schmid
Research Associate @ OST – University of Applied Sciences of Eastern Switzerland

First of all, tell us something about yourself: Which hashtags describe you the best?
#openminded #nerd #conversationalist

Tell us more about them.
I have always considered myself as an open-minded person – I like to philosophize about anything and have no problem with changing my opinion after I’ve discussed an issue in depth. Exchanging ideas with others during my free time has always been important for me, and especially for my current professional life. That’s because I think being able to thoroughly understand a problem a client, partner or friend may face is still the most important step in finding a solution when things get complicated. In today’s professional world, the data and tools we have play an important role in helping us understand complex contexts, making it indispensable to be a bit of a nerd.

 

Now, let’s talk about your professional life: What do you do at OST – University of Applied Sciences of Eastern Switzerland?
I’m a Research Associate at the Institute of Modelling and Simulation (IMS), where I work on various projects to make the findings and ideas from our work commercially available in Switzerland.

What did you do previously and why did you join the University of Applied Sciences of Eastern Switzerland?
I started out as a physicist but then switched to industrial engineering, which led me to my current role in tackling the complex issues at the IMS.

Tell us about the most exciting thing in your job.
What I like the most about my job is that it gives me an insight into a wide range of industries. In our projects we create models of complex systems that can include anything from emergency services to patient behavior, construction sites, or compatibility issues concerning employees. This means I have lots of chances to learn exciting things about our society and how it works.

Which data science skills are especially in demand in your job?
Success in my job means being able to grasp abstract content, understand coding, and communicate well. The first two of these help in all my projects to accurately identify a complex problem and work through it by using data. But I think that the ability to communicate is the key when it comes to developing your expertise (even if only when talking with pros) and making what you learned available to others.

Do you think of yourself more as a techie or as an analyst? Or as a creative genius, management superhero or generalist wiz?
I’m a techie with a creative streak, although my professional background indicates that I’m also a bit of a generalist.

What do you remember the most when you look back at your time in the MSc in Applied Information and Data Science program?
As a techie, I most enjoyed the subjects that involved math and coding. The variety and insights into the areas and aspects of data science (databases, cloud computing, machine learning, …) have taught me things that are proving to be very useful now.

 

What are the biggest challenges in your job at the moment?
The biggest challenge at the moment is working on such a wide variety of projects in different areas all at once. So it pays to keep a good overview of what’s going on. All the topics and industries of the projects I’m involved in often make me want to become an expert in each field.

What advice would you have for others starting in the same job?
1. Make sure that you become a good listener.
2. Keep honing your ability to drill down to what lies at the bottom of a problem.
3. Learn how to use more than one tool (or programming language) well. 

And finally: What new hashtag are you aiming for in 2022?
I want to improve my coding skills this year (see no. 3 above😉). It doesn’t necessarily have to be something technical, it could even be something relating to an app or web trend, as these topics are becoming ever more current for us.

Many thanks to Michael Schmid for this interesting interview and the insights into your job! 

Contact us if you have any questions about the degree programme or for individual advice:
Tel.: +41 41 228 42 53 / E-mail: master.ids@hslu.ch



Sports Hackdays 2021 – Challenge: Moneyball goes Football

The challenge of group 8 was to create appealing and insightful visualisations for football fans. The visualisation is meant to show data about football players and give football enthusiasts the possibility to compare different metrics across players. The goal was to create ideas and a prototype that could be implemented in the online newspaper “Watson” to attract more football fans to its homepage.

Author: Philipp Schaad
Challenge owners: Datahouse & Watson

Table of Contents
1. Introduction
2. Data
3. Ideation
4. Development
5. Results
6. Conclusion

Introduction

Challenge 8 was owned by Datahouse, a data science consultancy from Zurich, and Watson, a free Swiss online news platform. The goal of the challenge was to develop an appealing, easy-to-use, interactive visualisation in which data points of football players can be compared. This gives a potential visitor of Watson the possibility to compare his or her favourite football players with regard to different metrics.
The participants of the challenge had voluntarily chosen to contribute to this task. We were a mixed team of data science students in different semesters, one student from the Swiss Federal Institute of Sport Magglingen and the challenge owners from Datahouse and Watson. Some of us had prior knowledge about football and some had less.

Data

The data for our challenge was given: web-scraped data from different websites about football. The raw data contains 159 variables on about 4000 football players from the top five national leagues in Europe. Those leagues are:
• Ligue 1 from France
• Serie A from Italy
• Bundesliga from Germany
• La Liga from Spain
• Premier League from the United Kingdom
An additional data file from one of the challenge members was available to add more information. This file contains information about the players’ monetary market values at the time they were transferred from one football club to another; it comprises 272’338 observations and 26 variables. These two data files were the basis on which we could build our ideas.

 

Ideation

When we started, we had an initial look at the available variables in the data to get an idea of what we would be able to visualise on a dashboard. As a next step, we generated ideas about possible visualisations and comparisons that could be of interest to a football enthusiast visiting Watson.

Figure 2: Group 8 discussing variables, Source: https://www.linkedin.com/company/opendatahackdays/

During this process we agreed to build a dashboard. There were many ideas about what could be visualised on such a dashboard. In the end, we decided on three elements, each showing different insights about the players.
The first element was a similarity analysis between two players, in which different metrics are compared. The second element was to be a machine learning model that compares the players and returns the “best player”. The third element was to be a plot in which the user can choose two variables and compare players based on them.
We then split the team into groups to work on the different elements of the dashboard. I was mostly involved in building the machine learning model.

Development

To develop the solution, our team agreed on using R and building the dashboard as an R-Shiny application. R is straightforward for implementing fast prototypes and showing first results, and there are lots of packages and libraries that speed up data cleansing and development. Also, R-Shiny applications are easy to implement and can be embedded in existing websites such as Watson’s. To share the code, we developed the different elements of the solution locally and exchanged them via Slack.

The reason was that not all team members were acquainted with Git and GitHub, and introducing everyone to Git would have eaten into the limited time of the Sport Hackdays. When developing the machine learning model, we discussed in the group which variable should be used as the target to determine the “best player”. With our internal knowledge about football, we decided that the market value of a player is a reliable indicator for the “best player”. Another discussion concerned the position a player plays. The group’s football experts were unanimous that players’ market values differ depending on the position they play.

Strikers are usually the best paid and considered the most valuable performers. Many players do not play in only one position but in several, which made it difficult to clearly distinguish between the positions a player plays. In the dataset, there were around 130 combinations of positions. We decided to declutter that information: our football experts defined 6 possible positions and position combinations based on their domain knowledge. Because there were many missing values in the players’ market value variable, we joined the second data file, which contained much more information about the players’ market values, onto this column. Since the players’ names were not always written in the same way, we performed a fuzzy left join, which meant we had to clean up the merged dataset afterwards to correct errors introduced by the fuzzy matching.
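
The idea behind the fuzzy name matching can be illustrated with the Python standard library alone; the team’s actual join was done in R with a fuzzy-join package, and the names and values below are made up:

```python
import difflib

# Player names as they appear in the stats data vs. the market-value file.
stats_names = ["Lionel Messi", "Robert Lewandowski", "Kylian Mbappe"]
market_values = {"L. Messi": 80, "R. Lewandowski": 60, "K. Mbappé": 160}

def best_match(name, candidates, cutoff=0.4):
    """Return the closest candidate name, or None if nothing is similar enough."""
    hits = difflib.get_close_matches(name, candidates, n=1, cutoff=cutoff)
    return hits[0] if hits else None

# Fuzzy "left join": every stats name keeps a row, matched value or None.
joined = {n: market_values.get(best_match(n, list(market_values)))
          for n in stats_names}
```

The `cutoff` threshold is exactly the knob that produces the wrong matches mentioned above: set it too low and dissimilar names get paired, too high and valid pairs are dropped, so the merged result still needs manual cleanup.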

To standardise the data, generate more meaningful results and have more control over the data, we transformed the variables into z-scores. After the data was cleaned and ready for further processing, we decided that a random forest would be a fitting model for predicting the players’ market value: it is known to be resistant to overfitting and outliers and also works well with smaller datasets. We tried a single random forest with all player positions in it as well as a model with 6 random forests, one for each position in the dataset.
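
A minimal sketch of the z-transformation followed by a random forest fit, in Python rather than the R the team used, and on synthetic stand-in features:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
# Illustrative player features on very different scales
# (stand-ins for the ~159 scraped variables).
X = rng.normal(0, 1, (400, 5)) * [10, 3, 1, 50, 2] + [20, 5, 0.5, 100, 25]
# Synthetic market value driven mainly by the first feature.
market_value = X[:, 0] * 2 + X[:, 3] * 0.1 + rng.normal(0, 1, 400)

# z-transform each column so all variables are on a comparable scale.
X_z = (X - X.mean(axis=0)) / X.std(axis=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_z, market_value)
```

A per-position variant would simply repeat the fit on the subset of rows for each of the 6 defined positions, one forest per subset.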

While I was busy preparing the data and training a random forest model, my colleagues created the football player similarity analysis, displayed as a radar plot, as well as the scatterplot for comparing two variables between different players. Once the three elements were ready, we implemented them in our R-Shiny dashboard. Fortunately, Severin, the challenge owner from Datahouse, knew R-Shiny very well and was very engaged with the implementation.

Results

After two days and one night of ideation, discussion and hacking, we were able to present our final deliverable: a prototype of a dashboard on which football enthusiasts can compare players using different approaches.

Figure 3: R-Shiny Football Dashboard

Figure 3 shows what the final solution looks like. We styled the dashboard in the original Watson colours. The picture shows the similarity analysis, in which two players can be compared based on different metrics; different filters can be selected and weighted, and the analysis then shows a radar plot with the top matching players. The second element, the machine learning approach implemented as a random forest, predicts the most valuable players. The model is not fully implemented yet, but it shows a tendency as to which variables carry the most weight in determining a player’s market value.
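
A weighted similarity score of the kind that could drive such a radar comparison might be sketched as below; the metric names and weights are invented for illustration and assume z-scored inputs:

```python
import math

def similarity(a, b, weights):
    """Weighted Euclidean distance between two players, mapped to (0, 1]."""
    d = math.sqrt(sum(w * (a[k] - b[k]) ** 2 for k, w in weights.items()))
    return 1 / (1 + d)  # identical players score exactly 1.0

# Hypothetical z-scored metrics for two players.
p1 = {"goals90": 0.8, "assists90": 0.3, "passes": 1.2}
p2 = {"goals90": 0.7, "assists90": 0.4, "passes": 1.0}
# User-chosen filter weights, as on the dashboard.
w = {"goals90": 2.0, "assists90": 1.0, "passes": 0.5}

sim = similarity(p1, p2, w)
```

Ranking all players by `sim` against a chosen reference player yields the "top matching players" shown on the radar plot.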

Figure 4: Importance Plot

Figure 4 shows the top 10 most important variables for predicting the market value. The most important variable, “Cmp”, stands for “passes completed”, and “age_new” is the player’s age this year (2021). The model was able to explain about 85% of the variance in the data, which is a very good result. We had assumed that the position a player plays would be very important for predicting the market value; apparently, it was not. We also trained 6 different random forests, one for each position a player could play. For reasons not yet known, this model performed much worse and was only able to explain around 25% of the variance in the data, which is a poor result and needs further investigation.

Figure 5: Scatterplot

The third part of the dashboard is a scatterplot in which the user can choose two metrics; in the example in Figure 5, “Expected Goals + Expected Assists per 90 Minutes” is compared to “Goals + Assists per 90 Minutes”. The data is then plotted and some eye-catching, exceptional players are highlighted. For simplicity, only a few players are highlighted; otherwise, the names would clutter the whole plot and nothing would be readable.
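
The highlighting logic can be sketched as picking only the most exceptional points to label; the data and the over-performance criterion here are illustrative:

```python
# Hypothetical (expected, actual) goals+assists per 90 minutes per player.
players = {
    "A": (0.9, 1.1), "B": (0.3, 0.2), "C": (0.5, 0.4),
    "D": (1.4, 0.6), "E": (0.4, 0.5),
}

# Label only the players whose actual output most exceeds expectation (y - x),
# so the plot stays readable.
top_n = 2
highlighted = sorted(players,
                     key=lambda p: players[p][1] - players[p][0],
                     reverse=True)[:top_n]
```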

 

Conclusion

After two long, exhausting, joyful days full of learning that also tested our frustration tolerance, I was impressed by what we had achieved. This was only possible thanks to the diverse and motivated team. Of course, the solution is not final yet and needs some more work. The difference between the random forest with all player positions and the model with 6 position-specific random forests is something we would have to analyse more closely. The scatterplot could also be developed further and made more interactive, with a mouse-over option to show players’ names or a search function to find a specific player.

Overall, I am very satisfied with the prototype of the dashboard, and I believe it can give football enthusiasts some interesting insights and attract visitors to Watson. The interdisciplinary team of students with different backgrounds, some with more and some with less knowledge of football, together with the strongly committed challenge owners, made an excellent team to work with. For every step of our development process, there was someone with the right knowledge to challenge it, whether regarding the data or football-specific questions.

During the whole development of our solution, by far the most time-consuming step was cleaning the data. This step is essential for generating meaningful and insightful results. Good results are also characterized by being reproducible and replicable, so we tried to document and structure the code even during the short time available at the Hackdays. Luckily, Severin, the challenge owner from Datahouse, was with us the whole time. He was very dedicated to building a valuable prototype during the Hackdays, was always available to discuss data-related questions and was hands-on with the implementation of our solution.

For us it was a stroke of fortune to have Severin in the group. I learned a lot about how applied data science works. As a non-expert in football, I also learned a lot about the nature of the game thanks to the other colleagues in the group. Furthermore, I realised how important domain knowledge about football was and how valuable it was to have football enthusiasts in our group. This was best seen in the definition and elaboration of the various player positions. The lesson carries over to other projects: in every data science project, it is important to know the data and to understand where it comes from and how it was generated.

I was very impressed with our final solution and how far we had come in those two days. Finalising and implementing the solution will need some further tuning and analysis, but I am looking forward to seeing it live on Watson.

Many thanks to Philipp Schaad for this very interesting article and for his commitment!

Author: Philipp Schaad (philipp.schaad@stud.hslu.ch) Master of Science in Applied Information and Data Science, Lucerne University of Applied Sciences and Arts
Challenge owner: Datahouse, Watson

Contact us if you have any questions about the degree programme or for individual advice:
Tel.: +41 41 228 42 53 / E-mail: master.ids@hslu.ch


Data is the resource of the 21st century!
Register and join us for a free online Information-Event:

Monday, 2 June 2025 (Online, German)
Monday, 11 August 2025 (Online, English)

MORE INFORMATION ABOUT THE MASTER’S PROGRAMME HERE:
PROGRAMME INFO: MSc in Applied Information and Data Science
MORE FIELD REPORTS & EXPERIENCES: Professional portraits & study insights
FREQUENTLY ASKED QUESTIONS: FAQ


Professional Data Science portrait with Niclas Simmler: Decision Scientist


A new and exciting professional portrait of a "Decision Scientist": Niclas Simmler, our former HSLU Applied Information and Data Science student. Courageous and cheerful, he is not afraid of change but sees it as an opportunity to escape stagnation; technological change in particular inspires him. Niclas spends his working day at Sunrise UPC GmbH in the decisioning area. Find out more about Niclas' studies and daily work routine below.


Niclas Simmler
Decision Scientist @UPC Sunrise

First of all, tell us something about yourself: Which hashtags describe you the best?
#Curious #Openminded #ExcitedToBeAliveRightNow #CantThinkOfMoreHashtags

Tell us more about them.
I regard myself as an open-minded and tolerant person. I always welcome change and try to keep myself informed about what’s going on. Any change, good or bad, has its reasons – there’s no way we can develop if we stand still. If we compare the technological change in the last few years with what happened in the years before, we have good reason to get excited. And when we consider how technological change has accelerated – and assume with good reason that it will continue to do so, we have an even better reason to stay the course. At the end of the day, it’s our profession that’s the driving force in all this. That’s why I’m #ExcitedToBeAliveRightNow.

 

Now, let’s talk about your professional life: What do you do at Sunrise UPC GmbH?
My job title is Decision Scientist. As it suggests, I work in the field of decisioning – in other words, I’m involved in the decisioning process and am responsible for studying the so-called ‘next best offers’ and ‘next best actions.’ I am, so to speak, the one who makes data-based decisions and operationalizes the models that decide what the customer should do next from the company’s point of view. This function is part of marketing or, in my case, base management. In other words, it’s the department that manages the customer base. This department also has the marketing teams for x-selling (what other products should customers buy from us?), up-selling (which products do we believe can be optimized?) and retention (which products can make customers stay with us?).
The work of a decision scientist thus determines, based on data, what customers will see in shops or on our website, or what they will experience when contacting our call centers. Of course, the idea is always to present an offer that a) the customer is most likely to accept and b) is best for the company. I work very closely with our internal data science team, which means I’m not only involved in developing new models but also in acquiring new data sources from the company (big data), in thinking up new ways to acquire customers by using data, in defining requirements together with the marketing managers, and in developing software, among many other tasks.

What did you do previously and why did you join Sunrise UPC GmbH?
Previously, I was a Forensic Consultant at PricewaterhouseCoopers (PwC), where I also worked during my Master’s studies at HSLU. My work there mainly involved conducting forensic data analytics and doing e-discovery. Typical projects included fraud, bribery and money laundering offences. After my studies at the HSLU, I felt ready for a change and started looking for jobs in data science. I wanted to find one where I could apply what I had just been learning and that would genuinely make a difference. At Sunrise UPC (I originally did my apprenticeship there), I am the only decision scientist and thus have the opportunity not only to help improve the customer experience but also to contribute my knowledge and enthusiasm in other ways.

Tell us about the most exciting thing in your job.
Above all, there are several very exciting things going on at the moment. On the one hand, Sunrise UPC is currently in the middle of a merger. This means I can play an active role in shaping many of the systems and processes that I need for my daily work. Secondly, we have an extremely short time-to-market. So, customers are likely to quickly get a taste of whatever happens to be churning in our data-obsessed minds. We are able to deliver ideas in days or weeks that will take other companies years. As a telecom service provider, we have a virtually inexhaustible source of data, some of which can be used for marketing. The most exciting thing about it is the creativity that this job calls for – often what may seem like the most absurd idea is the best one that we end up implementing.

Which data science skills are especially in demand in your job?
I work with a wide range of stakeholders. On the one hand, there are the business teams (in my case, classic marketing), the management team, the IT and data engineers, as well as data scientists, among others. The ability to express technical relationships simply and to be the link between all these stakeholders, so to speak, is the most interesting thing for me.

Do you think of yourself more as a techie or as an analyst? Or as a creative genius, management superhero or generalist wiz?
I completed an apprenticeship and got my BA in computer science before doing my MA in data science at HSLU. In my heart I’ve always been and always will be a techie. However, my years of study have taught me that as a techie I can only go so far. Consequently, I think I’ve incorporated all of these qualities to some degree.

What do you remember the most when you look back at your time in the MSc in Applied Information and Data Science program?
That’s easy: My Master’s thesis. Now, this isn’t limited to the thesis itself but includes all the subjects that made it possible for me to write it. I think that the months I invested in the thesis were the most intense and the most exciting ones of my life. My work was very research-oriented and dealt with cutting-edge findings. My thesis also involved writing an article and presenting my ideas at the Swiss Data Science (SDS) Conference 2021.

 

What are the biggest challenges in your job at the moment?
In and during my previous jobs I felt like I was just an employee. In my current job, however, I’m responsible for an entire field that puts me in contact with many departments. There are high expectations on me, and meeting them is not always easy. But that’s precisely what makes it all so exciting.

What advice would you have for others starting in the same job?
Take the job – even if the job title doesn’t yet include data scientist. There are too many data scientist jobs that are not what you may think.

And finally: What new hashtag are you aiming for in 2022?
#HaveFun
In short: we’re done with our studies. Now it’s time to apply what we’ve learned, have fun at work and help to bring about change. After all, we’re in a field that can make a difference for an entire company.

Many thanks to Niclas Simmler for this interesting interview and the insights into your job! 



Professional Portrait with Pascal Himmelberger: Internal Audit Manager, Data Analytics Leader


Our former HSLU and Applied Data Science student Pascal Himmelberger is working at the Swiss National Bank as "Manager Internal Audit, Data Analytics Lead". Creativity, innovation and fearlessly plunging into the unknown are some of his strong characteristics. Read more about Pascal Himmelberger's professional life with a look back at his studies with us.

Pascal Himmelberger
Manager Internal Audit / Data Analytics Lead Data Science

@Swiss National Bank

First of all, tell us something about yourself: Which hashtags describe you the best?
#creativity #innovation #exploration #pragmatism #unlockingpotential #goyourway

Tell us more about them. 
– #creativity and #innovation in technology and business have always fascinated me and are things I’m very comfortable with because they call for an explorative and iterative approach. In other words, I like simply trying something out, venturing onto new territory and making continuous improvements.
– However, creating value (#unlockingpotential) also means being pragmatic and having the ability to actually implement and realize things.
– Truly new ways and approaches call for a broad perspective and lots of experience. To innovate, you need to work with unique, knowledgeable and experienced people who will help you go your own way.  

 

Now let’s talk about your professional life: What do you do at the Swiss National Bank?
I am a manager in the internal audit unit, which means I’m responsible for setting up and developing the bank’s data analytics. On the one hand, I help to prepare data and analyses so that the bank can use them for its auditing work; on the other hand, I review and ideally improve its existing data and analytical activities.

What did you do previously and why did you join the Swiss National Bank?
Previously, I worked for various consulting companies in the field of data analytics. I specialized in intelligent automation and forensic data analytics and thus spent a great deal of time on such projects. Before being a consultant, I worked as an IT auditor at a large auditing company, where I learned a lot about various analytics topics, and this in turn further sparked my interest in the field. I joined the SNB because it gave me the rare opportunity to set up and develop the area of data analyses in an exciting institution pretty much from scratch.

Tell us about the most exciting thing in your job.
In addition to the opportunities mentioned above to establish analytics as a core theme at the SNB, I also have the chance to experience all aspects of a truly unique institution. I find it motivating to contribute to the success of an institution with such an important national function.

Which data science skills are especially in demand in your job?
My role currently requires conceptual and technical skills with which to design and carry out data analyses. This means understanding business and process requirements, developing strategies, setting up the environments and pipelines (databases, SQL, Python, R, data management and engineering) for analyzing data, as well as preparing and visualizing the data (PowerBI, Python Matplotlib, R-Shiny) for a specific stakeholder group.

Do you think of yourself more as a techie or as an analyst? Or as a creative genius, management superhero or generalist wiz?
My interests are very broad. I find many topics around data analysis, innovation, management and technology very exciting and therefore think of myself rather as a generalist.

What do you remember the most when you look back at your time in the MSc in Applied Information and Data Science program?
The degree program had a lot of exciting content thanks to its broad range of subjects, and it gave me as a student lots of room to pursue my interests. I often found data engineering topics relating to infrastructure especially appealing and liked the challenge and excitement of building a pipeline, ideally an automated one, for pre-processing and analyzing data. The additional challenge of managing big data (with its variety, speed and volume) added another level of excitement. The Modern Data Engineering in the Cloud course was definitely a highlight. 

 

What are the biggest challenges in your job at the moment?
Introducing an entirely new topic in an organization from the very beginning takes a lot of time and thought. Often there’s no clear “right” or “wrong.” So, the question becomes more a matter of what makes the most sense for the requirements and current situation – and discovering what this involves is not always trivial!

What advice would you have for others starting in the same job?
Contribute what you know and have the courage to question the status quo.

And finally: What new hashtag are you aiming for in 2021?
#keeplearning

Many thanks to Pascal Himmelberger for this interesting interview and the insights into your job! 


Professional Portrait with Stephan Wernli: Data Engineer / Data Scientist


A new exciting job description of a "Data Engineer / Data Scientist", with Stephan Wernli, our former HSLU and Data Science student. His credo: Continuous learning, especially in information technology, is essential. In this dynamic environment, the willingness to constantly develop is the most important quality. Stephan spends his working day at Endress+Hauser Flow. Find out more about Stephan Wernli's Data Science studies and his work life below.

Stephan Wernli
Data Engineer/Data Scientist @Endress+Hauser Flow

First of all, tell us something about yourself: Which hashtags describe you the best?
#lifelonglearning #analytics #connected

Tell us a bit more about them.
It’s rather obvious that my hashtags are in the technology and analytics fields. I believe that life-long learning is essential, especially in information technology. In such a dynamic environment, the willingness to constantly develop yourself is probably the most important quality you’ll need. 

Another aspect is that technology is in a permanent state of flux and thus offers lots of opportunities to tackle new challenges. And because of all the technological progress, we constantly need to rethink the solutions we have and revise them if necessary. Only those who manage to obtain up-to-date information can manage complex analytical problems effectively and efficiently. These are the reasons why I decided to pursue a career as a data scientist. I always want to be confronted with new challenges in the hope of finding an even better solution. By constantly developing myself, building an excellent network and using the right analytical tools, I hope to complete projects successfully and sustainably.

Now let’s talk about your professional life: What do you do at Endress+Hauser Flow?
At Endress+Hauser Flow, I’m part of the Industrial Engineering team, which manages, monitors and improves production processes. I have the dual role of data engineer and data scientist, which means I have a very broad range of activities spanning across much of the digital value chain. As a data engineer, I prepare and pre-process data from a range of sources. The aim here is to make the data available for downstream analytical processes and BI activities. In my role as a data scientist, I continue along the process by taking the prepared data and looking for whatever findings I can derive from it. In other words, I’m responsible for a large part of our information processing – from preparing the data, processing it via data pipelines and then modelling it in the data warehouse for analytical purposes. 

Basically, however, I focus on the analytical part and how we can implement machine learning in the company. My goal is to develop a reliable and resilient basis for the analytics of Endress+Hauser Flow with which it can cover its needs. My activities also include advising managers about their projects. In addition, as a core team member, I have a global responsibility for the data science platform and work with my colleagues at group level to coordinate what architecture and solutions we need to ensure a reliable result. I see myself more as a conceptualizer and implementer than as a consultant because my expertise is mostly in analytics and technology.

 

What did you do before and why did you join Endress+Hauser Flow?
Before I started working in data science, I got my Bachelor’s degree in engineering and management, with a major in supply chain management, at the University of Applied Sciences and Arts Northwestern Switzerland (FHNW). Already during my studies, I noticed that I especially liked working on mathematical and analytical problems. 

After my studies, I worked as a project manager for digitalization and logistics projects in a medium-sized company. When I decided to do the Master’s program in Applied Information and Data Science, it was clear that this would also involve a career change. 

I joined the Endress+Hauser Group because the company stands for innovation and change, values that I also strongly believe in. It’s important to me to inspire people with new technical solutions and to create value. The Endress+Hauser Group is internationally known as a leading company in the field of measurement technology and offers its customers the possibility to monitor and control their processes. These services are possible only by collecting and processing large amounts of data across all areas of the company, from production all the way to how customers actually use the product. By applying analytical methods, the company can continuously improve its products and processes.

Tell us about the most exciting thing in your job.
Basically, I like the range of challenges I face and the opportunity to master them by using digital technologies in a team. Each project offers us a chance to invent and choose the best problem-solving strategy.

Whether it involves optimizing production methods, processing sensor values, modelling complex process or using the process mining method to do calculations based on MES data, I can independently realize new ideas daily and apply a wide range of algorithms to solve problems. However, the thing I value most is actively developing the central data processing platform, which allows me to make a lasting contribution to the analytical foundation of the Endress+Hauser Group.

Which data science skills are especially in demand in your job?
As I am mainly involved in design and implementation, I would say that technical and analytical skills are the crucial ones in my job. But let’s not forget that when working in a company you’re always also part of a social as well as a technical system, something we should never lose sight of.

Data science is such a broad discipline, and it’s continuously developing and breaking new ground. It’s therefore all the more important to always question whether your solution strategy is the best one for the company and the customer.

Do you think of yourself more as a techie or as an analyst? Or as a creative genius, management superhero or generalist wiz?
I would describe myself as an absolute techie. My focus is clearly on designing and building analytical constructs and pipelines. These make it possible to manage complex analytics and machine-learning tasks, and to automate and monitor them.

What do you remember the most when you look back at your time in the MSc in Applied Information and Data Science program?
What impressed me most was the range of application fields and the diversity in the field of data science itself. It was only during my studies that I realized how varied data science actually is – be it through my fellow students themselves, who came from very different fields, or through the use cases and modules. I also want to mention the strong commitment of the lecturers. And I came to appreciate the exemplary organization of the program and the freedom I had to plan my studies around my individual needs.

What are the biggest challenges in your job at the moment?
Even though data science is getting a lot of attention, it’s still a young discipline in companies. Its structures and processes are not yet fully established, and we still have to overcome some hurdles. However, this challenge also offers a lot of room to explore new options and makes it possible to significantly shape a company culture in the long term. My goal is to come up with the best possible solutions for my employers so that they have a reliable basis for moving forward. Furthermore, as I already mentioned, data science is a holistic discipline with its own processes and mechanisms, ones that must now be integrated into the company in order to harness the potential of analytics.

What advice would you have for others starting in the same job?
I don’t have any advice that covers everyone’s needs. Every company struggles with its own problems during the various phases of its existence. As part of digitalization and analytics, data science is not yet a finished project; instead, we should think of it as a process. It’s thus important to adopt a holistic approach and to know which steps to take in the process. 

So, try to build a network in the company and convince others to support your projects. Only when use cases are actually implemented can we see the value we were able to create. Here, the ability to assert yourself and persuade others is just as important as your ability to analyze things. It’s not easy to build a digital ecosystem, and there’s no optimal solution you can implement as a standard. You therefore have to really understand what the company needs and what’s involved in taking the next step, then implement it and make it a permanent part of the value chain.

And finally: What new hashtag are you aiming for in 2022?
My goal for 2022 is simply to keep learning, to get new projects and to provide my company with the best possible technical solutions. I also want to take data science from the lab and integrate it into production, thus making the value contained in information become visible. For this reason, I’ll stick with my choice of hashtags for now.

Many thanks to Stephan Wernli for this interesting interview and the insights into your job! 

 


Watson Chatbot Challenge by IBM


Conversational Artificial Intelligence (AI) is no longer science fiction but an increasingly mainstream capability that consumers interact with daily in their homes, workplaces and on the go. Usually known as bots, chatbots or virtual assistants, this conversational AI makes up a crowded and confusing enterprise market, leaving buyers with many "bot" versions that may not talk to each other effectively.

Watson Assistant is IBM’s virtual assistant solution that allows users to interact with business systems using natural human language. IBM has married a technically robust conversational platform and developer- and line-of-business-friendly tools with the breadth of the broader Watson portfolio. Enterprises can build and train the AI solution to serve a wide range of use cases across applications, devices and channels.

Supported by IBM Switzerland, the Watson Chatbot Challenge is a new inter-university course format with Fachhochschule Nordwestschweiz (FHNW), Hochschule Luzern (HSLU), the University of St. Gallen (HSG) and Hochschule Zürich (ZHAW). The course, which also involves multiple industry partners, was launched in the current summer semester. The objective is for teams of students to acquire extensive knowledge in conversational AI, conversational design, AI design thinking and dialogue systems by solving real business cases based on IBM AI technology.

 

The course aims to design industry-specific conversational use cases and implement them using state-of-the-art frameworks of IBM Watson Assistant. The students learn conversational design, natural language processing (NLP) in general and specifically natural language understanding (NLU) and generation (NLG), as well as dialogue design. Furthermore, the students can get a glimpse into machine learning and knowledge engineering, depending on the group project requirements and the students' preferences.

As an example, one of the use cases was set by Skyguide, the air navigation service provider that manages and monitors Swiss airspace. Correct use of phraseology, speech timing and clear communication are extremely important for air traffic controllers (ATCOs), so training for the role is resource-intensive and requires many personnel hours. A well-developed chatbot assistant as a training partner could be a useful tool to reduce training costs, with the added benefits of letting trainees practise whenever they want and providing technical oversight of speech pacing and clear pronunciation that may be difficult for human trainers to monitor consistently.

Several teams took up the challenge of building such an assistant. The team of students from Hochschule Luzern, comprising Tracey Etheridge, Nandor Babina, Matteo Karten, Valdrin Arifi and Moreno Gasser, won best Skyguide chatbot for their detailed design and comprehensive incorporation of useful features, including text-to-speech and a website with trainee performance monitoring.

For this first prototype design, they looked at interactions between ATCOs and pilots during the landing and take-off phases. The chatbot introduces four different aircraft that need to be guided with appropriate instructions. Its design includes recognising the trainee's input as intents and entities, i.e. the words and synonyms that are key to identifying a specific intent. The team then created corresponding dialog models that recognise whether the correct aircraft is being addressed and whether the required phraseology is present (e.g. "SWISS 287 Descend to 5000 feet and reduce to 220 knots"). A challenging but successfully implemented aspect was ensuring that a full dialog flow was completed in the correct order for each aircraft, while also allowing the flexibility to communicate with different aircraft in varying order.
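The intent/entity idea can be illustrated in plain Python. This is not the team's Watson Assistant implementation (that is built with Watson's own dialog tooling); it is a tiny, self-contained sketch with hypothetical callsigns and phraseology patterns:

```python
# Sketch of intent/entity recognition for ATC phraseology: identify which
# aircraft (entity) an instruction addresses and which required phrases
# (intents) it contains. Callsigns and patterns are illustrative only.
import re

AIRCRAFT = {"SWISS 287", "DLH 4CK"}  # hypothetical callsigns (entities)
INTENTS = {
    "descend": re.compile(r"\bdescend to \d+ feet\b", re.IGNORECASE),
    "reduce_speed": re.compile(r"\breduce to \d+ knots\b", re.IGNORECASE),
}


def parse_instruction(text: str):
    """Return (callsign, matched intents) for a trainee's instruction."""
    callsign = next((c for c in AIRCRAFT if text.upper().startswith(c)), None)
    intents = [name for name, pattern in INTENTS.items() if pattern.search(text)]
    return callsign, intents
```

For the example instruction from the text, `parse_instruction("SWISS 287 Descend to 5000 feet and reduce to 220 knots")` would yield the callsign plus both intents; a dialog model on top of this would then track, per aircraft, which instructions have already been given.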

“We are very grateful to IBM and Skyguide for the opportunity to practically apply the knowledge about machine learning, AI and NLP that we have learned through our course,” states the Hochschule Luzern student team. “A particularly challenging point was tracking the conversation state of each aircraft and which instructions the trainee had already given. The IBM team was extremely helpful in discussing these issues and provided advice on dialog structure and technical aspects that greatly assisted with the successful implementation.”

 

Finally, Lars Mallien, who runs the program at IBM, concludes: “The Watson Chatbot Challenge has been a great success. Not only have the students shown that they have learnt the concepts and logic of building a chatbot; I am also astonished by the variety and quality of the presented chatbots. In every use case, the students have shown that there are various ways to solve a business problem in a very distinctive way. I am delighted with the great work provided by the students, which more than once went beyond the assigned task. I hope the students have enjoyed the classes the way I did, and I am looking forward to the next edition in spring 2022.”

14 teams with a total of 79 students and faculty members from five universities participated in the programme. Besides ATC Speech (Skyguide), Sika, AKB, Crealogix and BAG also sponsored specific industry challenges. The winning teams will be invited to a special tour of the Zurich Research Lab. The cloud infrastructure for the teams was sponsored through an IBM Cloud credit university award.

Author: Etheridge Tracey, student of the Master of Science in Applied Information and Data Science


Data is the resource of the 21st century!
Register and join us for a free online Information-Event:

Monday, 2 June 2025 (Online, German)
Monday, 11 August 2025 (Online, English)

MORE INFORMATION ABOUT THE MASTER’S PROGRAMME HERE:
MORE FIELD REPORTS & EXPERIENCES: Professional portraits & study insights
PROGRAMME INFO: MSc in Applied Information and Data Science
DOWNLOAD BROCHURE: MSc in Applied Information and Data Science Brochure
CHECK OUT OUR: Generalist profile
FREQUENTLY ASKED QUESTIONS: FAQ

Contact us if you have any questions about the degree programme or for individual advice:
Tel.: +41 41 228 42 53 / E-mail: master.ids@hslu.ch

Deep learning to diagnose malaria – Graduates develop the Malatec app

A quick and inexpensive malaria diagnosis via an app should be possible! That is the shared conviction of our recent graduates of the Master’s programme “Applied Information and Data Science”. To make it happen, they decided to develop the Malatec smartphone application. The diagnosis is to be made using an inexpensive, 3D-printed microscope (under CHF 5), which is being developed for this purpose.

The Malatec development team: Daniel Barco, Stephana Müller, Silvan Burnand & Benjamin Hohl

As a mosquito-borne infectious disease found mostly in the southern hemisphere, malaria often causes high fever and can even lead to death if it is wrongly treated or left untreated. Pregnant women are especially vulnerable and have a higher risk of death. Children below the age of five are the hardest hit, accounting for around two-thirds of all deaths (World Health Organization WHO, 2019). The WHO estimates that around 228 million malaria cases occurred worldwide in 2018, 213 million (93%) of them on the African continent (WHO, 2019). The annual case numbers roughly match the combined population of the UK, Germany and France, with a correspondingly negative impact on the health, economy and development of the affected regions.

Algorithm for detecting malaria directly

Malatec Approach – proof of concept in action


There are two methods for diagnosing a malaria infection. The first one uses a rapid test that can detect parasites through a chemical reaction with the blood. But it tells you only whether the person is infected; it doesn’t tell you anything about the stage of the infection. In the second method, a doctor takes a blood sample and either examines it under a microscope or sends it to a lab for a diagnosis. During the process, the malaria parasites (plasmodia) in the blood sample are dyed for better detection and then counted manually, a step that’s not only labor-intensive and error-prone but also requires educated and trained specialists who may be in short supply in some regions.

So, how could a malaria test work on a smartphone? Well, it also relies on dyed blood samples to detect plasmodia, the same as a lab. However, the parasites are not detected and counted by a person using a microscope but by an object-recognition algorithm that is based on a neural network.
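Conceptually, the counting step then reduces to filtering the detector’s output by confidence and tallying parasites against red blood cells. A minimal sketch — the detections, class names and threshold here are hypothetical; a real system would obtain them from the trained neural network:

```python
# Each detection from the (hypothetical) object-recognition network:
# a predicted class plus a confidence score.
detections = [
    {"label": "plasmodium", "score": 0.94},
    {"label": "red_blood_cell", "score": 0.99},
    {"label": "plasmodium", "score": 0.41},   # below threshold -> discarded
    {"label": "red_blood_cell", "score": 0.97},
    {"label": "red_blood_cell", "score": 0.88},
]

def count_parasitemia(detections, threshold=0.5):
    """Count confident detections per class and derive the infection ratio."""
    kept = [d for d in detections if d["score"] >= threshold]
    parasites = sum(d["label"] == "plasmodium" for d in kept)
    cells = sum(d["label"] == "red_blood_cell" for d in kept)
    ratio = parasites / cells if cells else 0.0
    return parasites, cells, ratio

parasites, cells, ratio = count_parasitemia(detections)
```

The parasite count and the parasite-to-cell ratio are exactly the figures a lab technician would otherwise determine by counting under the microscope.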

A mobile malaria diagnosis that relies on algorithms has many advantages over conventional rapid malaria tests. Firstly, an app diagnosis makes it possible to store patient data centrally and track outbreaks geographically. This means that local health facilities can prepare for an outbreak by stocking up on medicines, materials and medical staff. Secondly, an app-based malaria diagnosis is more accurate than a rapid test and delivers results faster than a lab. It can determine the exact number of parasites and the pathogen types, information that then helps doctors to choose the right treatment.

The potential of this new method lies in how widely it can be used in the future. And here’s the best part: mobile microscope hardware plus the right algorithms can be designed to diagnose other illnesses as well, such as tropical diseases for which there are currently no rapid tests.

Developing a prototype and the next steps

In developing a Malatec prototype, the team relies on various partners who want to use data science for a good cause. The Master of Applied Information and Data Science programme was able to support the Malatec team with the purchase of the malaria blood samples needed for developing the prototype, and the Institute of Pathology at the Lucerne Cantonal Hospital has agreed to analyze them. The data from these analyses serve as the basis, or “ground truth”, for training the algorithm. A search for partners to finance the prototype, as well as for donors of a microscope, is currently underway. Malatec’s ambitious goal is to develop a cost-effective, mobile and accurate malaria diagnosis system and then bring it to the African market.



We want to thank the Malatec team for sharing these interesting insights into their project! Have a look and discover Malatec’s professional portrait & interview.



Interested? If you want to know more about this project, please contact the team at ai.malatec@gmail.com !

Authors: Daniel Barco, Stephana Müller, Silvan Burnand & Benjamin Hohl
Website: https://malatec.ch/ / About Malatec: https://malatec.ch/about-us/
References: World Health Organization. (2019). World Malaria Report 2019. Retrieved on 21 September 2020 from https://www.who.int/publications/i/item/world-malaria-report-20


Professional Portrait with Carmen Moreno: Analyst – Credit Management Office

Find out interesting facts about the job of an “Analyst – Credit Management Office” with Carmen Moreno. Our former HSLU Applied Information and Data Science student – multi-talented and always eager to try new things – experiences her day-to-day work at Julius Baer. Read more about Carmen’s interesting work life and its challenges below.

Carmen Moreno
Analyst – Credit Management Office @Julius Baer

First of all, tell us something about yourself: Which hashtags describe you the best?
#experimenter #kitesurfer #explorer #reinventyourself

Tell us a bit more about them.
I love trying out new things: Whether it involves taking ceramic classes, learning calligraphy, skydiving, or kickboxing, I’m always looking for new feelings and experiences that help me grow. But you’re also likely to find me by the water, because whenever I can I go looking for some wind for my kite, or I go diving to explore a new underwater universe. I like different kinds of water sport. But I’m especially keen on kitesurfing and have been making steady progress. There are no prescribed limits: You define your boundaries and how strong you want to be in that domain. And for that I have travelled to a lot of exotic places. It’s hard to describe why the sea is so important to me. It just gives me a great feeling. I’m fortunate that I can also practice this sport on the various lakes in Switzerland, surrounded by the Alps.

 

Now let’s talk about your professional life: What do you do at Julius Baer?
I’m a member of the Credit Management Office & Controls team, which is responsible for internal controls of the global credit business. This means I’m involved in acquiring, processing and analyzing data and then communicating the results to senior management. This information then flows into various reports before it gets passed on to our teams and managers. Our analysis forms the basis of important decisions. On the other hand, the lending business at JB calls for a deep understanding of markets because of the special cases we handle. I’ve had the job for eight months now and still feel like a total newcomer! Luckily, I’m in a great team that’s very supportive.

What did you do before and why did you join Julius Baer?
I previously worked in the marketing department for a renowned brand in the food sector, where I was largely involved in project management. But I missed working more with numbers and making decisions based on data.
I am not yet a financial expert, but I found the bank to be very attractive because of its complexity and the global impact of its business. Here, having the right data is the key to making good decisions – especially when considering the business volume that JB handles. That’s why I find banking to be such an interesting sector. I didn’t expect to get such an exciting job, given my modest background in finance. But somehow it all worked out. I’m very grateful to my current boss who believed in me.

Tell us about the most exciting thing in your job.
There’s so much to learn, not only in the technical field but also in the lending business for very wealthy individuals and companies. The lending business is much more complex and interesting than I ever could have imagined. Analyzing the data involves so much, and you have to be very, very precise. Priorities can change quickly, so you need to be flexible and open to changes. But that’s exactly what makes it so exciting for me – especially when there’s a good team to support you.

Which data science skills are especially in demand in your job?
I think that automating reports and improving data quality are very important for us. In day-to-day business, SQL skills, IT skills, VBA and sometimes Tableau are particularly in demand.
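A toy illustration of the kind of SQL-based data-quality check this involves, run here against an in-memory SQLite database with a hypothetical schema — the real tables and checks at a bank would of course be far more extensive:

```python
import sqlite3

# Hypothetical credit-positions table (names and schema are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions (client_id TEXT, exposure REAL)")
conn.executemany(
    "INSERT INTO positions VALUES (?, ?)",
    [("C1", 120000.0), ("C2", None), ("C1", 120000.0)],
)

# Rows with missing exposure values -- candidates for follow-up.
missing = conn.execute(
    "SELECT COUNT(*) FROM positions WHERE exposure IS NULL"
).fetchone()[0]

# Exact duplicate rows, which would distort aggregated reports.
duplicates = conn.execute(
    "SELECT COUNT(*) FROM (SELECT client_id, exposure, COUNT(*) AS n "
    "FROM positions GROUP BY client_id, exposure HAVING n > 1)"
).fetchone()[0]
```

In practice such checks feed automated reports, so that data issues surface before the analysis reaches senior management.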

Do you think of yourself more as a techie or as an analyst? Or as a creative genius, management superhero or generalist wizard?
I think I fall somewhere in between an analysis geek and a management superhero. I enjoy analyzing data and information. But I also need a change once in a while. So, managing small projects provides some variety and lets me exchange ideas with others. I need that to stay motivated.

What do you remember the most when you look back at your time in the MSc in Applied Information and Data Science program?
First of all, I want to mention that the lecturers were hugely enthusiastic and truly passionate about the various fields of data science, which was really contagious. I probably enjoyed working on group and individual projects the most; in other words, I liked anything that involved practical work. The lecturers gave us a lot of freedom in choosing the topics and tools, so I had the opportunity to develop in the direction that interests me. That’s important because the professional field of data science is huge. My studies also sparked a fascination for data visualization, and I tried to use visualization techniques and tools whenever I could.

What are the biggest challenges in your job at the moment?
My biggest challenge at the moment is to broaden my knowledge of finance and credit management, which is where I’m weaker right now in relation to my technical skills. So that’s what I’ll probably be focusing on the most over the next 8 to 12 months. But it’s an exciting field, and my team motivates and helps me a lot, so I’m enthusiastic about tackling this challenge. In time, when I have more experience and confidence, I’ll probably take on more complex process digitization projects and work with digital transformation in general.

What advice would you have for others starting in the same job?
First of all: If you want to apply fancy ML or DL methods, this is not the job for you. The position is geared more towards data analysts. I discussed this with colleagues and believe there is a shared awareness, at least in the financial sector, that we need to digitalize processes in general and further develop our data analysis capabilities. In some cases, you can actually use ML to improve data quality, build risk models and identify potentially fraudulent transactions. But even in these cases, ML makes up only a small part of the work. Proper stakeholder management, communication skills and project management, as well as data sourcing, cleansing, engineering, etc., account for the lion’s share of the work.
If you want to work in this field, you need to be inquisitive, open-minded and perhaps a little brave to get involved in an area that was previously foreign to you. But if you’re passionate about your work and like what you do, you’ll be OK no matter what! The business environment needs data innovators, but be prepared to challenge yourself and stick with what you’re doing.

And finally: What new hashtag are you aiming for in 2021?
I would like to further develop my data visualization skills, which means I’ll probably aim for a certification in Tableau or Power BI, or both. I’ll try to do it in my current position, but if that’s not possible, I may volunteer for projects with a non-profit organization.

Many thanks to Carmen Moreno for this informative interview and the interesting insights into her job!

 

