Planning under uncertainty assumes a model that specifies the probabilistic effects of actions in terms of changes to the state. Given such a model, planning proceeds to determine a policy that defines the choice of action at each state so as to maximize a reward function. In this work, we recognize that the world may have degrees of freedom not necessarily captured in the given model, and that a planning solution based on that model may be suboptimal compared to the solutions that become possible when the additional degrees of freedom are considered.
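
To ground the standard setting the abstract describes, here is a minimal value-iteration sketch for a toy MDP; the states, actions, transition probabilities, and rewards below are illustrative inventions, not the model from the talk:

    # Minimal value iteration on a toy MDP (illustrative only).
    # P[s][a] is a list of (probability, next_state, reward) triples.
    P = {
        0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
        1: {"stay": [(1.0, 1, 0.5)], "go": [(1.0, 0, 0.0)]},
    }
    gamma = 0.9            # discount factor
    V = {s: 0.0 for s in P}

    for _ in range(100):   # iterate to (approximate) convergence
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in P[s].values())
             for s in P}

    # The greedy policy picks, in each state, the action with highest expected value.
    policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                             for p, s2, r in P[s][a]))
              for s in P}
    print(V, policy)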

In this talk, we highlight two challenges in today's deep learning landscape that involve adding structure to the input or latent space of a model. We will discuss how to overcome some of these challenges by using learnable optimization sub-problems that subsume standard architectures and layers. These architectures obtain state-of-the-art empirical results in many domains, such as continuous-action reinforcement learning and tasks that involve learning hard constraints, like the game Sudoku.
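
As a rough illustration of a learnable optimization sub-problem inside a network, the sketch below unrolls a projected-gradient solver in a layer's forward pass so gradients flow into the sub-problem's parameters; this unrolled variant is one simple way to realize the idea, not necessarily the method from the talk, and all names are invented:

    # The forward pass solves  min_y 0.5*||y - x||^2 + y^T q  s.t. y >= 0
    # by unrolled projected gradient descent, so autograd can backprop
    # through the solver steps and learn the cost term q.
    import torch

    class ProjGradLayer(torch.nn.Module):
        def __init__(self, dim, steps=20, lr=0.5):
            super().__init__()
            self.q = torch.nn.Parameter(torch.zeros(dim))  # learnable cost term
            self.steps, self.lr = steps, lr

        def forward(self, x):
            y = torch.zeros_like(x)
            for _ in range(self.steps):
                grad = (y - x) + self.q                        # objective gradient
                y = torch.clamp(y - self.lr * grad, min=0.0)   # project onto y >= 0
            return y

    layer = ProjGradLayer(dim=4)
    out = layer(torch.randn(2, 4))
    out.sum().backward()    # gradients flow into layer.q through the solver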

In this talk, Qing presents the Parallel Logging DB (PLDB), a new in-situ analysis technique for indexing data within DeltaFS. Designed as a scalable, serverless file system for HPC platforms, DeltaFS scales file-system metadata performance with application scale. PLDB is a novel extension to the DeltaFS data plane that enables in-situ indexing of massive amounts of data written simultaneously to a single DeltaFS directory, across an arbitrarily large number of files.
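
As a purely hypothetical sketch of what in-situ indexing can look like (invented names and format, not the PLDB or DeltaFS interface): an index from key to file offsets is maintained while records are written, so later point reads seek directly instead of scanning the whole log:

    # While records are appended to a log, a key -> (offset, length) index
    # is built on the fly, so reads can seek directly rather than scan.
    import os

    class IndexedLog:
        def __init__(self, path):
            self.f = open(path, "ab+")
            self.index = {}                    # key -> list of (offset, length)

        def append(self, key, value: bytes):
            off = self.f.seek(0, os.SEEK_END)  # record where this write lands
            self.f.write(value)
            self.index.setdefault(key, []).append((off, len(value)))

        def read(self, key):
            out = []
            for off, length in self.index.get(key, []):
                self.f.seek(off)
                out.append(self.f.read(length))
            return out

    log = IndexedLog("particles.log")
    log.append("particle-42", b"x=1.0 y=2.0")
    print(log.read("particle-42"))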

Mainstream adaptively merges the video-stream processing of concurrent applications sharing fixed edge resources to maximize aggregate result quality. Its approach enables partial-DNN compute sharing among applications whose deep neural networks (DNNs) are fine-tuned from the same base model, decreasing aggregate per-frame compute time.
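
A minimal sketch of the partial-DNN sharing idea, assuming applications fine-tune only the final layers of a common base model (module names are invented, not Mainstream's code):

    # Applications whose models share a frozen base run the base once per
    # frame and only their specialized heads separately.
    import torch

    base = torch.nn.Sequential(                  # shared base layers
        torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten())
    base.requires_grad_(False)                   # base is frozen; only heads differ

    heads = {                                    # per-application fine-tuned heads
        "pedestrians": torch.nn.Linear(8, 2),
        "vehicles":    torch.nn.Linear(8, 5),
    }

    frame = torch.randn(1, 3, 64, 64)
    features = base(frame)                       # computed once, shared by all apps
    results = {app: head(features) for app, head in heads.items()}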

Trying to solve the data riddle purely through the lens of architecture misses a vital point: the unifying factor across all data is a dependency on time. The ability to capture and factor in time is the key to unlocking real cost efficiencies.

QuasarDB is a scalable timeseries database designed to handle extreme use cases such as those found in market finance. In this talk, we will look at several of the design and implementation decisions made to deliver QuasarDB's current performance, particularly around network communication, memory management, and real-time aggregation.
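
As a toy sketch of real-time aggregation in the spirit described (not QuasarDB's implementation): incoming samples incrementally update per-bucket aggregates, so window queries read a few precomputed buckets instead of raw points:

    from collections import defaultdict

    BUCKET_NS = 1_000_000_000                    # 1-second buckets

    agg = defaultdict(lambda: {"count": 0, "sum": 0.0,
                               "min": float("inf"), "max": float("-inf")})

    def ingest(ts_ns, value):
        b = agg[ts_ns // BUCKET_NS]              # update aggregates on ingest
        b["count"] += 1
        b["sum"] += value
        b["min"] = min(b["min"], value)
        b["max"] = max(b["max"], value)

    def query_avg(start_ns, end_ns):
        lo, hi = start_ns // BUCKET_NS, end_ns // BUCKET_NS
        hit = [b for k, b in agg.items() if lo <= k < hi]
        n = sum(b["count"] for b in hit)
        return sum(b["sum"] for b in hit) / n if n else None

    ingest(500_000_000, 10.0)
    ingest(1_500_000_000, 20.0)
    print(query_avg(0, 2_000_000_000))           # -> 15.0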

To keep pace with Moore's law, chip designers have focused on increasing the number of cores per chip rather than single-core performance. In turn, modern jobs are often designed to run on any number of cores. However, to effectively leverage these multi-core chips, one must address the question of how many cores to assign to each job. Given that jobs receive sublinear speedups from additional cores, there is an obvious tradeoff: allocating more cores to an individual job reduces the job's runtime, but decreases the efficiency of the overall system.
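
A toy calculation makes the tradeoff concrete; the sublinear speedup curve s(k) = k^0.5 is an assumed form for illustration, not a result from the talk:

    # More cores shrink one job's runtime but lower per-core efficiency.
    def speedup(k, p=0.5):
        return k ** p                            # sublinear: doubling cores < 2x faster

    for k in (1, 2, 4, 8, 16):
        runtime = 100.0 / speedup(k)             # job's runtime with k cores
        efficiency = speedup(k) / k              # useful work per allocated core
        print(f"{k:2d} cores: runtime {runtime:6.1f}s, "
              f"per-core efficiency {efficiency:.2f}")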

Wednesday, October 18, 2017

Carnegie Mellon University senior Eric Zhu technically majors in computer science, but he's a true Renaissance man. He's served two years on CMU's Student Senate, spent a year as a resident assistant, made an effort to take at least one humanities course each semester, participated in CMU Mock Trial and has never abandoned his love of classical piano.

The Relaxed Memory Calculus (RMC) is a novel approach to portable low-level concurrent programming in the presence of the relaxed memory behavior caused by modern hardware architectures and optimizing compilers. RMC takes a declarative approach to programming with relaxed memory: programmers explicitly specify constraints on execution order and on the visibility of writes. This differs from other low-level programming language memory models, which, when they exist, are usually based on ordering annotations attached to synchronization operations and/or explicit memory barriers.
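
As a conceptual toy (written in Python, so only the declarative flavor carries over, none of the actual memory-model semantics): ordering constraints can be stated as explicit edges between labeled actions and checked against candidate executions, rather than implied by barriers:

    # Declared edges say "a must happen before b", as in the classic
    # message-passing idiom: write data before flag, read flag before data.
    edges = {("init_data", "set_flag"), ("read_flag", "read_data")}

    def respects(order, edges):
        pos = {op: i for i, op in enumerate(order)}
        return all(pos[a] < pos[b] for a, b in edges if a in pos and b in pos)

    print(respects(["init_data", "set_flag", "read_flag", "read_data"], edges))  # True
    print(respects(["set_flag", "init_data", "read_flag", "read_data"], edges))  # False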

We posit that aspects of frameworks, such as inversion of control and the structure of framework applications, require developers to adjust their debugging strategy compared to debugging sequential programs. However, the benefits and challenges of framework debugging are not fully understood, and gaining this knowledge could provide guidance for debugging strategies and framework tool design.
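
A minimal sketch of inversion of control, with invented names: the framework owns the main loop and calls back into application code, which is why a breakpoint in a handler is reached from framework internals rather than from the developer's own call chain:

    class Framework:
        def __init__(self):
            self.handlers = []

        def register(self, handler):
            self.handlers.append(handler)

        def run(self, events):
            for event in events:                 # the framework owns the main loop
                for handler in self.handlers:
                    handler(event)               # application code is called back

    def on_event(event):                         # application-supplied handler
        print("handling", event)

    fw = Framework()
    fw.register(on_event)
    fw.run(["click", "keypress"])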
