(ETH Zurich), Valerio Zanetti-Überwasser (T-Systems Schweiz AG)
Data Science and Machine Learning have become relevant in many research areas and industries. The amount of collected data repeatedly breaks previously known speed and volume barriers, creating the need for automated data processing. Before automated processing can take place, a machine or algorithm has to be trained for the intended task, whether that is information search and retrieval, gaining insights or taking actions. The training phase can last a long time and occupy a large part of the available infrastructure, especially if it has to be repeated on new incoming data. To minimize infrastructure costs, machine learning workloads tend to be offloaded to specialized hardware.
Machines are trained to take semantic action in specific domains. To act successfully in such a context, machine learning cannot rely solely on data and general-purpose algorithms; domain models play an important role in generating accurate results. Data Science - which can be seen as a combination of mathematics, heuristics and domain knowledge - helps discover patterns and regularities in data, which ideally give rise to new models that help us understand the digitized ocean of data.
What algorithms and computational models are best fit for machine learning at scale?
How scalable are the current implementations of support vector machines and deep neural networks?
Which type of hardware is a good match for offloading machine learning workloads: DSPs, ASICs, GPUs?
What deployment models and data flows are most supportive for machine learning applications?
How does machine learning impact resource usage in an HPC cluster?
What can Data Science adopt from HPC and vice versa?
The talk will present a solution that T-Systems has created for Deutsche Bahn to improve passenger information by predicting train arrivals in real time based on the trains' current positions.
It will be shown how "classical" statistical machine learning approaches can be combined with artificial neural networks to solve the problem. The solution is designed so that it can scale horizontally on a Hadoop-based HPC platform.
Furthermore, an outlook on new data types and compute approaches in industrial HPC applications will be given.
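One plausible reading of this combination is a residual scheme: a classical statistical model produces a baseline arrival-time prediction, and a small neural network learns to correct the baseline's systematic errors. The sketch below is a toy illustration under that assumption, not the actual Deutsche Bahn model; all data, features and network sizes are invented.

```python
# Hypothetical sketch: a statistical baseline (least-squares line) combined
# with a tiny neural network trained on the baseline's residuals.
# Everything here is illustrative, not the production system.
import math
import random

random.seed(0)

# Toy data: distance to station (km) -> delay (minutes), with a mild
# non-linearity that a linear baseline cannot capture on its own.
xs = [i / 10 for i in range(1, 101)]
ys = [0.5 * x + 0.3 * math.sin(x) + random.gauss(0, 0.05) for x in xs]

# --- Stage 1: "classical" statistical baseline (ordinary least squares) ---
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
baseline = [slope * x + intercept for x in xs]
residuals = [y - b for y, b in zip(ys, baseline)]

# --- Stage 2: one-hidden-layer network fitted to the residuals ---
H = 8                                     # hidden units (arbitrary choice)
w1 = [random.gauss(0, 0.5) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.gauss(0, 0.5) for _ in range(H)]
b2 = 0.0
lr = 0.01

def nn(x):
    xn = x / 10.0                         # normalize toy feature into (0, 1]
    h = [math.tanh(w1[j] * xn + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

for _ in range(2000):                     # plain SGD over the toy set
    for x, r in zip(xs, residuals):
        out, h = nn(x)
        err = out - r
        xn = x / 10.0
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * xn
            b1[j] -= lr * grad_h
        b2 -= lr * err

def predict(x):
    """Hybrid prediction: statistical baseline plus learned correction."""
    return slope * x + intercept + nn(x)[0]

mse_base = sum((y - b) ** 2 for y, b in zip(ys, baseline)) / n
mse_hybrid = sum((y - predict(x)) ** 2 for x, y in zip(xs, ys)) / n
print(mse_base, mse_hybrid)
```

The design point this illustrates: the baseline stays cheap and interpretable, while the network only has to model what the baseline misses, which keeps it small.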
(T-Systems Schweiz AG)
Keynote presentation: Near Real-Time Optimization of Train Traffic in Densely Used Network Areas at SBB
Room: New York
SBB operates one of the busiest railway networks in the world. In densely used parts of the railway network, the planned headway of 2 minutes between trains requires strict control of train sequence and train velocity to avoid unnecessary stops and additional delays.
A near real-time optimization based on mixed integer programming calculates the optimal solution every 6 seconds. This optimization algorithm is an integrated component of SBB's centralized dispatching system and has been in operation since 2013.
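To give a flavor of the kind of model involved, here is a toy integer program in the same spirit: hold trains back by whole minutes so that consecutive passing times respect a minimum headway, minimizing the total added delay. Exhaustive enumeration stands in for a real MIP solver, and all trains, times and bounds are invented, not SBB's actual formulation.

```python
# Toy headway optimization: choose integer hold times for three trains so
# that passing times are at least HEADWAY apart, minimizing total delay.
# Brute-force enumeration replaces the MIP solver; numbers are illustrative.
from itertools import product

HEADWAY = 2          # minimum separation at the bottleneck (minutes)
planned = [0, 1, 3]  # planned passing times of three trains (minutes)
MAX_HOLD = 5         # each train can be held back at most 5 minutes

best_cost, best_times = None, None
for holds in product(range(MAX_HOLD + 1), repeat=len(planned)):
    times = [p + h for p, h in zip(planned, holds)]
    order = sorted(times)
    # Feasibility: consecutive trains must be >= HEADWAY minutes apart.
    if all(b - a >= HEADWAY for a, b in zip(order, order[1:])):
        cost = sum(holds)                 # objective: total added delay
        if best_cost is None or cost < best_cost:
            best_cost, best_times = cost, times

print("optimal total delay:", best_cost, "passing times:", best_times)
# prints: optimal total delay: 2 passing times: [0, 2, 4]
```

A production system solves a far larger model (train sequencing and velocity profiles, re-solved every few seconds), but the structure — integer decisions, separation constraints, a delay objective — is the same.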
Astrophysical simulations have been a constant presence on HPC clusters around the world for many years. The available computational power is now so large that the biggest production runs can easily generate datasets of hundreds of TB in just a few days. Efficiently post-processing this data is a significant challenge, because it can no longer fit in memory and the usual domain tools are very cumbersome to use at this scale. To address this problem, and the problem of “big data” analysis on HPC clusters in general, we have undertaken a project to bring together the benefits of HPC with the ease of use of Big Data frameworks. We have developed an analysis code built on top of Apache Spark to analyze 200+ TB outputs from a recent state-of-the-art cosmological simulation run on Piz Daint at CSCS. Spark is used for orchestration of work and collection of intermediate results; highly optimized domain code is used for the main part of the computation. In addition, we have developed a tool that allows for quick and easy deployment and monitoring of Spark clusters on HPC infrastructure. I will discuss the issues inherent in combining scientific codes with Big Data frameworks and the approaches we used to overcome them, from the perspective of both software and hardware.
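The orchestration pattern described — the framework farms out data partitions to optimized domain code and gathers intermediate results for a final reduction — can be sketched with the Python standard library standing in for Spark. The "domain kernel" below (a per-partition histogram) is invented for illustration; in the real pipeline it would be compiled simulation post-processing code invoked on each partition.

```python
# Sketch of the Spark-style map/reduce orchestration pattern, using the
# standard library as a stand-in for Spark. Kernel and data are invented.
from concurrent.futures import ThreadPoolExecutor
import random

random.seed(42)

def domain_kernel(chunk):
    """Stand-in for optimized native code: histogram one data partition."""
    hist = [0] * 10
    for mass in chunk:
        hist[min(int(mass * 10), 9)] += 1
    return hist

def merge(h1, h2):
    """Reduce step: combine two intermediate histograms."""
    return [a + b for a, b in zip(h1, h2)]

data = [random.random() for _ in range(100_000)]   # toy "simulation output"
chunks = [data[i::8] for i in range(8)]            # 8 disjoint partitions

# Map phase: dispatch each partition to the domain kernel in parallel.
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(domain_kernel, chunks))

# Reduce phase: fold the intermediate results into one final answer.
total = partials[0]
for h in partials[1:]:
    total = merge(total, h)

print(sum(total))   # every value lands in exactly one bin
```

The key property — the kernel only ever sees one partition, so no single process needs the whole dataset in memory — is what makes the pattern work at the hundreds-of-TB scale described in the abstract.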