Wednesday, October 18, 2017

Carnegie Mellon University senior Eric Zhu technically majors in computer science, but he's a true Renaissance man. He's served two years on CMU's Student Senate, spent a year as a resident assistant, made an effort to take at least one humanities course each semester, participated in CMU Mock Trial and has never abandoned his love of classical piano.

For More Information, Contact:

The Relaxed Memory Calculus (RMC) is a novel approach for portable low-level concurrent programming in the presence of the relaxed memory behavior caused by modern hardware architectures and optimizing compilers. RMC takes a declarative approach to programming with relaxed memory: programmers explicitly specify constraints on execution order and on the visibility of writes. This differs from other low-level programming language memory models, which---when they exist---are usually based on ordering annotations attached to synchronization operations and/or explicit memory barriers.


We posit that aspects of frameworks, such as inversion of control and the structure of framework applications, require developers to adjust their debugging strategy compared to debugging sequential programs. However, the benefits and challenges of framework debugging are not fully understood, and gaining this knowledge could provide guidance in debugging strategies and framework tool design.


Planning under uncertainty assumes a model that specifies the probabilistic effects of actions in terms of changes of the state. Given such a model, planning proceeds to determine a policy that defines the choice of action at each state so as to maximize a reward function. In this work, we observe that the world may have degrees of freedom not necessarily captured in the given model, and that a planning solution based on that model may be sub-optimal compared to the solutions possible when the additional degrees of freedom are considered.
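The setup described above is the standard Markov decision process (MDP) formulation. As a concrete illustration of what "a policy that maximizes a reward function" means (a generic sketch, not the authors' method; the function and variable names are ours), value iteration computes the optimal state values and the greedy policy for a small MDP:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Compute an optimal policy for a finite MDP.

    P[a][s] is the transition-probability row over next states for
    taking action a in state s; R[s][a] is the immediate reward.
    Returns (policy, values), where policy[s] is the greedy action.
    """
    n_actions, n_states = len(P), len(P[0])
    V = np.zeros(n_states)
    while True:
        # One-step lookahead: Q(s, a) = R(s, a) + gamma * E[V(s')]
        Q = np.array([[R[s][a] + gamma * (P[a][s] @ V)
                       for a in range(n_actions)]
                      for s in range(n_states)])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1), V_new
        V = V_new

# Toy 2-state, 2-action MDP: action 0 stays put, action 1 swaps states;
# being in state 1 yields reward 1. The optimal policy moves to state 1
# and stays there.
P = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])]
R = np.array([[0.0, 0.0], [1.0, 1.0]])
policy, values = value_iteration(P, R)
```

The paper's point can be phrased in these terms: if the true world has transitions or actions absent from `P` and `R`, the policy returned above is only optimal with respect to the modeled dynamics.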


Generative modeling, a core problem in unsupervised learning, aims at understanding data by learning a model that can generate datapoints resembling the real-world distribution. Generative Adversarial Networks (GANs) are an increasingly popular framework that solves this by optimizing two deep networks, a "discriminator" and a "generator", in tandem.
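To make the "in tandem" optimization concrete, here is a minimal sketch of the two adversarial objectives (illustrative only; the function names are ours, and `d_real`/`d_fake` stand for discriminator outputs on real and generated samples):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # The discriminator wants D(x) -> 1 on real data and D(G(z)) -> 0
    # on generated data; minimizing this negated log-likelihood is
    # equivalent to maximizing the GAN value function.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # Non-saturating generator objective: push the discriminator to
    # assign high probability to generated samples.
    return -np.mean(np.log(d_fake))
```

Training alternates gradient steps on the two losses: the discriminator improves at telling real from fake, while the generator improves at fooling it, and at the idealized equilibrium the generated distribution matches the data distribution.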
