Dr. Rinku Dewri
October 28, 2011, JGH 316, 2:00 - 3:00 PM
Department of Computer Science, University of Denver
http://www.cs.du.edu/~rdewri
Privacy in the Blue Circle
Abstract:
Location privacy research has received wide attention in the past few years owing to the growing popularity of location-based applications and the accompanying skepticism about the collection of location information. In this talk, we shall provide an overview of approaches used for location privacy preservation and explore the implications that a certain form of background knowledge has for location privacy. We argue that location obfuscation can in fact adversely impact the preservation of privacy in certain scenarios, specifically under "blue circle" knowledge, and demonstrate how perturbation-based mechanisms can instead provide a well-balanced trade-off between privacy and service accuracy.
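As a rough illustration of the perturbation approach the talk contrasts with obfuscation, the sketch below adds planar Gaussian noise to a reported position. The noise model, scale, and coordinate conversion are illustrative assumptions, not the mechanism analyzed in the talk.

```python
import math
import random

def perturb_location(lat, lon, sigma_m=200.0):
    """Perturb a (lat, lon) fix with planar Gaussian noise of scale sigma_m meters.

    A generic perturbation sketch; the specific mechanism analyzed in the
    talk may differ.
    """
    dx = random.gauss(0.0, sigma_m)   # east-west offset, meters
    dy = random.gauss(0.0, sigma_m)   # north-south offset, meters
    meters_per_deg_lat = 111_320.0    # approximate, ignores Earth's flattening
    meters_per_deg_lon = meters_per_deg_lat * math.cos(math.radians(lat))
    return lat + dy / meters_per_deg_lat, lon + dx / meters_per_deg_lon

# The service sees a noisy point rather than a cloaked region.
print(perturb_location(39.6766, -104.9619))  # a fix near the DU campus
```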
Author Bio:
Rinku is an Assistant Professor in the Department of Computer Science at the University of Denver. He received his PhD in Computer Science from Colorado State University. His research interests include information security and privacy, network risk management, data management, and multi-criteria decision making.
Dr. Michael Kahn
November 11, 2011, JGH 316, 2:00 - 3:00 PM
Associate Professor of Epidemiology, Department of Pediatrics, University of Colorado Denver
Director of Clinical Informatics, Department of Quality & Patient Safety, The Children’s Hospital, Aurora, CO
Michael.Kahn@childrenscolorado.org
Data Sharing Across a National Distributed Clinical Research Network
Abstract:
As large government incentives accelerate the adoption of electronic medical records (EMRs) in the United States, the volume of detailed clinical data captured as part of routine clinical care is growing rapidly. At the same time, there is an urgent need to understand which clinical practices actually produce superior clinical outcomes. These two trends are feeding two contemporary movements in medical research: Comparative Effectiveness Research and Rapid Learning Health Systems. Both activities require large amounts of EMR data from disparate clinical practices to be analyzed for outcomes and quality improvement research.
SAFTINet is one of three national distributed research networks (DRNs) funded by the Agency for Healthcare Research and Quality (AHRQ); the award to the University of Colorado supports the creation of a distributed research network for sharing EMR data. SAFTINet is partnering with four safety net health care facilities in Colorado, Utah and Tennessee to share data using an advanced grid-based computing platform developed by the National Cancer Institute, originally created for sharing experimental research data. In this presentation, I will describe the basic underlying use case for the SAFTINet DRN and its high-level functional requirements, and then describe the underlying architecture for sharing detailed clinical data at a national scale. I will highlight current computational challenges that may be the basis for cross-institutional collaborations with the SAFTINet technical team.
Author Bio:
Dr. Kahn is Associate Professor of Epidemiology in the Department of Pediatrics at the University of Colorado Denver; Co-Director of the Colorado Clinical and Translational Sciences Institute (CCTSI); Biomedical Informatics core director for the CCTSI; and Director of Clinical Informatics in the Department of Quality & Patient Safety at The Children’s Hospital. Dr. Kahn holds joint appointments in the School of Medicine, School of Public Health, College of Nursing and Graduate School at the University of Colorado. From October 2009 to 2011, he was co-chair of the national NIH-funded Clinical and Translational Sciences Award (CTSA) Informatics Key Function Committee, which represents the informatics core directors for all CTSA grantees.
Dr. Kahn’s research interests include real-time clinical decision support linked to clinical outcomes monitoring, clinical data warehouses for both operational and retrospective research support, integration of electronic medical records with prospective research, and data quality assessment methods in distributed multi-institutional clinical research networks.
Dr. Kahn received a BS in Biological Sciences and a BA in Chemistry from the University of California, Irvine; an M.D. from the University of California, San Diego; and a Ph.D. in Medical Information Sciences from the University of California, San Francisco. He was a Visiting Research Scholar in the Medical Computer Sciences program at Stanford University. He is board-certified in Internal Medicine, a Fellow of the American College of Physicians, and an elected Fellow of the American College of Medical Informatics.
Dr. Nicholas Shaw
November 18, 2011, JGH 316, 2:00 - 3:00 PM
Information Systems Architect V, HP; Major, US Army (Retd.)
Doc@DocHarley.com
Internet Protocol (IP) Traceback
Abstract:
While research into tracing a packet back to its source (IP traceback) has been ongoing since the late 1990s, success in real-world environments has not been achieved. This is due to factors that include the stateless nature of the Internet, where a packet can take any route; the lack of validation of sender IP addresses, which enables spoofing; systems inserted into the attack path that originate the attack traffic but are not the attacker themselves (zombies, stepping stones, reflectors); and operational and management policy issues when crossing ISP network domains. The attacks against which IP traceback has been targeted are flood attacks: Denial of Service (DoS), where one system is used to attack a victim, and Distributed Denial of Service (DDoS), where up to millions of systems around the world can be used to attack a victim. In both, the attacker floods the victim(s) with enough packets to overload system resources and/or consume network bandwidth, thus preventing legitimate users from accessing these resources. A DDoS attack is much more crippling than a simple DoS attack, primarily due to the number of systems involved. Recent and well-publicized examples of DDoS attacks were those launched by the group "Anonymous" against several companies and Amazon.com in December 2010 as part of "Operation Payback". This presentation discusses the past and current state of IP traceback in both IPv4 and IPv6. It discusses zombies, stepping stones, and reflectors, along with proactive and reactive approaches to IP traceback and the issues related to each. It will also touch on privacy laws in the United States as well as Internet Service Provider (ISP) issues and concerns.
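One classic proactive approach from the traceback literature is probabilistic packet marking, in which routers occasionally stamp forwarded packets with their identity so that a flood victim can reconstruct the attack path offline. The toy simulation below illustrates the idea under simplified assumptions (a dictionary "packet", an invented router path, and a textbook marking probability); the talk surveys this and other approaches in more depth.

```python
import random

MARK_PROB = 0.04  # marking probability p, a classic value from the PPM literature

def forward(packet, router_id):
    """Each router marks a transiting packet with probability MARK_PROB.

    The victim reconstructs the attack path offline from the collected
    (router, distance) marks; closer routers are seen at smaller distances.
    """
    if random.random() < MARK_PROB:
        packet["mark"] = router_id   # overwrite any earlier mark
        packet["distance"] = 0
    elif "mark" in packet:
        packet["distance"] += 1      # one more hop since the last mark
    return packet

# Simulate a flood along a fixed five-router path and tally marks at the victim.
path = ["R1", "R2", "R3", "R4", "R5"]   # R5 is adjacent to the victim
tally = {}
for _ in range(10_000):
    pkt = {}
    for router in path:
        forward(pkt, router)
    if "mark" in pkt:
        key = (pkt["mark"], pkt["distance"])
        tally[key] = tally.get(key, 0) + 1

# Sorting by distance recovers the path order: R5 at 0, R4 at 1, and so on.
print(sorted(tally.items(), key=lambda item: item[0][1]))
```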
Author Bio:
Dr. Shaw is an Information Systems Architect V (Master) with HP's Enterprise Services in Colorado Springs, in the Office of the CTO, Automation. He is a retired Army officer with service in the Signal Corps, Air Defense, and Army Science Corps; his last assignment was at the U.S. Army Concepts Analysis Agency (USACAA) in Bethesda, MD. Dr. Shaw has two doctorates, both in Computer Science: one from The Johns Hopkins University and the other from Colorado Tech. He is currently working on a doctorate in Computer Information Systems from Nova Southeastern University with a concentration in Information Security; his dissertation is on network security. His research interests are Radio Frequency Fingerprinting (RFF), privacy in Location-Based Services (LBS), and Internet Protocol (IP) traceback.
Mr. Tim Weil
January 13, 2012, JGH 102, 2:00 - 3:30 PM
Senior Manager - Information Security, Raytheon Polar Services, Centennial, CO
http://www.securityfeeds.com
INCITS Role-Based Access Control
Abstract:
The central concept of Role-Based Access Control (RBAC) is that IT permissions are delegated to roles; users assigned to roles receive the permissions granted to those roles. This level of indirection can provide simpler security administration and finer-grained access control policies. Over the past 15 years RBAC has provided a widely used model for security administration in large networks of applications and other IT resources. In 2004, the RBAC model proposed by the National Institute of Standards and Technology (NIST) was adopted by the InterNational Committee for Information Technology Standards (INCITS) as standard INCITS 359-2004. This talk will discuss the recent standards development activities of the INCITS Role-Based Access Control working group, with a focus on the challenges of designing and implementing robust Identity and Access Management solutions in small-scale and enterprise IT environments.
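For readers new to the model, here is a minimal sketch of the two core RBAC relations (user-to-role assignment and role-to-permission assignment). The class and names are illustrative only; the INCITS 359 standard additionally specifies role hierarchies, constraints, and an administrative API.

```python
class RBAC:
    """Minimal core RBAC: users are assigned roles, roles are granted permissions."""

    def __init__(self):
        self.role_perms = {}   # role -> set of permissions
        self.user_roles = {}   # user -> set of roles

    def grant(self, role, perm):
        self.role_perms.setdefault(role, set()).add(perm)

    def assign(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def check_access(self, user, perm):
        # A user holds a permission iff some assigned role grants it.
        return any(perm in self.role_perms.get(r, set())
                   for r in self.user_roles.get(user, ()))

rbac = RBAC()
rbac.grant("nurse", "read:chart")
rbac.grant("physician", "read:chart")
rbac.grant("physician", "write:orders")
rbac.assign("alice", "physician")
rbac.assign("bob", "nurse")
print(rbac.check_access("alice", "write:orders"))  # True
print(rbac.check_access("bob", "write:orders"))    # False
```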
Author Bio:
Tim Weil is a Security Architect with over twenty years of IT management, consulting and engineering experience in the U.S. Government and the communications industry. Mr. Weil's technical areas of expertise include enterprise security architecture, FISMA compliance, identity management, and network engineering. His work with Role-Based Access Control (RBAC) includes implementations with multiple vendor products (BMC, BetaSystems, IBM) and several years of development efforts with the INCITS working group on Cybersecurity/RBAC standards. His degrees include an M.S. in Computer Science from Johns Hopkins University and a B.A. in Sociology from Immaculate Heart College. He holds the CISSP, Project Management Professional (PMP), Certified Information Systems Auditor (CISA), and Certified in Risk and Information Systems Control (CRISC) credentials. Mr. Weil is a Senior Member of the IEEE and has served in several IEEE administrative positions (Chair, DC Section, 2009). He currently works as the Senior Manager - Information Security for Raytheon Polar Services (Centennial, Colorado).
Simon Mckeown
January 20, 2012, JGH 102, 2:00 - 3:00 PM
DaDaFest International Artist of the Year 2010-2011; Teesside University, Computer Animation and Post Production
http://www.simon-mckeown.com/
Motion Disabled
Author Bio:
Simon Mckeown graduated from Newcastle Polytechnic with a degree in Fine Art and has since worked in digital animation and post-production. Simon has worked in many sectors of the media, from television post-production for the BBC and ITV to computer games, delivering high-quality work such as the best-selling Driver series. Now a Reader in Animation and Post-Production at Teesside University, Simon has combined his expertise in the digital medium with a passion for exploring human difference, creating innovative and engaging artwork. Currently the most successful of these works is ‘Motion Disabled’, the internationally celebrated and critically acclaimed installation.
Abstract:
Motion Disabled is a digital exploration of the bodies - the biological pathologies - of people who are physically different. The work makes use of motion capture, a technique more commonly associated with feature films and computer games, along with 3D animation, to create a kinetic connection with the human form - beautiful everyday movements highlighting all the intricacies and uniqueness of each person's physicality. It was created by recording the physical movements of fourteen physically impaired people with conditions such as Spina Bifida, Cerebral Palsy and Brittle Bones - their physical signatures captured in 3D forever.
Dr. Aaron Beach
January 27, 2012, JGH 102, 2:00 - 3:00 PM
Postdoctoral Researcher – Energy Informatics, National Renewable Energy Laboratory, Golden, CO
http://www.aaronbeach.com
Show me the Data! How to share private data on Facebook or in the Smart Grid
Author Bio:
Dr. Aaron Beach is a researcher at the National Renewable Energy Laboratory in Golden, Colorado. A native of the Denver area, Dr. Beach received his bachelor's degree in computer science from Northwestern University and his PhD in computer science from the University of Colorado. Dr. Beach's past research has included wireless routing protocols, sensor networks, mobile computing applications, and search and anonymization algorithms. His current research is focused on real-time modeling and simulation of complex energy systems. In 2007, Dr. Beach founded a company with his PhD advisor, Prof. Richard Han, focusing on location-aware advertising for mobile social networking applications. Dr. Beach came to the National Renewable Energy Laboratory in 2011 and is currently working on a generic data framework to support real-time decision algorithms on top of real performance data from complex energy systems. He is also designing strategies for how smart meter data and its associated metadata can be managed and shared without violating consumer privacy.
Abstract:
Conceptions of privacy have recently shifted from concerns about humans exchanging information to concerns about computers mining it. The ability of computers to collect, relate, and process large and disparate sets of data has made it possible to infer relationships that were previously obscured. These changes have deprecated existing privacy mechanisms (legal and technological) and motivated many new privacy conceptions and mechanisms. In this talk Dr. Beach will discuss his past experience designing an anonymous API for Facebook applications and his current work on managing large sets of smart meter and utility data (while navigating largely misunderstood privacy and proprietary concerns in the energy community). Dr. Beach will discuss how privacy requirements are modeled and communicated in such systems and how different conceptions of privacy practically relate (or do not relate) to certain real-world privacy problems.
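As one concrete example of a formal privacy notion relevant to shared utility data, the sketch below checks k-anonymity over a set of records. The field names and data are invented for illustration, and the systems discussed in the talk may rely on different privacy definitions entirely.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every combination of quasi-identifier values occurs at least k times."""
    groups = Counter(tuple(rec[q] for q in quasi_ids) for rec in records)
    return all(count >= k for count in groups.values())

# Invented smart-meter rows: 'zip' and 'tier' act as quasi-identifiers.
meter_data = [
    {"zip": "80401", "tier": "residential", "kwh": 31.2},
    {"zip": "80401", "tier": "residential", "kwh": 28.7},
    {"zip": "80401", "tier": "commercial",  "kwh": 310.0},
]
print(is_k_anonymous(meter_data, ["zip", "tier"], k=2))  # False: one commercial row
```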
Dr. Akihiro Kishimoto
February 3, 2012, JGH 102, 3:00 - 4:00 PM
CANCELLED DUE TO SEVERE WEATHER
Department of Mathematical and Computing Sciences, Tokyo Institute of Technology
http://www.is.titech.ac.jp/~kishi/
Large-scale Parallel Best-First Search for Optimal Planning
Author Bio:
Dr. Akihiro Kishimoto is an Assistant Professor in the Department of Mathematical and Computing Sciences at Tokyo Institute of Technology. He received the B.Sc. degree from the University of Tokyo in 1997 and the M.Sc. and Ph.D. degrees from the University of Alberta in 2002 and 2005, respectively. His research interests include artificial intelligence and parallel computing; in particular, he is interested in developing high-performance game-playing programs and planning systems.
Abstract:
Large-scale parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. I investigate a parallel algorithm for optimal sequential planning, with an emphasis on exploiting distributed-memory computing clusters. The scaling behavior of the algorithm is evaluated experimentally on clusters using up to 1024 processors. I show that this approach scales well, allowing us to effectively utilize the large amount of distributed memory to optimally solve problems that require a terabyte of RAM.
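A common way to run best-first search across distributed memory is to hash every state to an owning worker, so that open lists and duplicate detection are both partitioned. The single-process sketch below simulates that idea with round-robin polling; it is an assumption-laden stand-in, not the algorithm evaluated in the talk.

```python
import heapq

def owner(state, n_workers):
    # Every state hashes to exactly one worker, so duplicate detection is local.
    return hash(state) % n_workers

def hash_distributed_search(start, goal, neighbors, h, n_workers=4):
    """Single-process simulation of hash-distributed best-first search.

    Each simulated worker keeps its own open list; newly generated states
    are "sent" to the worker that owns their hash value. Round-robin
    polling stands in for true parallelism.
    """
    open_lists = [[] for _ in range(n_workers)]
    g_best = {start: 0}
    heapq.heappush(open_lists[owner(start, n_workers)], (h(start), 0, start, [start]))
    incumbent = None  # best (cost, path) to the goal found so far
    while any(open_lists):
        for w in range(n_workers):
            if not open_lists[w]:
                continue
            f, g, state, path = heapq.heappop(open_lists[w])
            if incumbent is not None and f >= incumbent[0]:
                continue  # cannot improve on the incumbent solution
            if state == goal:
                incumbent = (g, path)
                continue
            for nxt, cost in neighbors(state):
                g2 = g + cost
                if g2 < g_best.get(nxt, float("inf")):
                    g_best[nxt] = g2
                    dest = owner(nxt, n_workers)  # "send" to the owning worker
                    heapq.heappush(open_lists[dest], (g2 + h(nxt), g2, nxt, path + [nxt]))
    return incumbent[1] if incumbent else None

# Toy usage: shortest path on a line graph with a zero heuristic.
adj = {i: [(i + 1, 1)] for i in range(9)}
adj[9] = []
print(hash_distributed_search(0, 9, lambda s: adj[s], lambda s: 0))
```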
Dr. Leemon Baird
February 10, 2012, JGH 102, 2:00 - 3:00 PM
Academy Center for Cyberspace Research, US Air Force Academy, Colorado Springs, CO
http://www.leemon.com
Unkeyed Jam Resistance with BBC
Author Bio:
Dr. Leemon Baird is Senior Scientist at the Academy Center for Cyberspace Research, at the US Air Force Academy. He has been Professor of Computer Science at the Academy, and founder and CTO of two successful startups. He works on research projects with students and faculty at a number of universities, and is always interested in starting new collaboration projects with others.
Abstract:
It is usually easy to jam a radio signal so that two parties cannot communicate. Jam-resistant systems make this more difficult, forcing the attacker to use hundreds or thousands of times more power to interrupt the communication. Unfortunately, all existing systems for jam resistance require some secret to be shared between the sender and receiver that isn't known to the jammer. These secrets ("keys") can be difficult to manage for large groups of radios, and are impossible for applications like civilian GPS, where the "legitimate user" may also be the attacker. BBC is the first system that can achieve jam resistance without shared secrets. This talk will discuss the new theory of concurrent codes, how BBC uses it to achieve unkeyed jam resistance, mathematical proofs of its security, and how BBC has performed in actual DoD jamming exercises. Finally, open problems will be discussed, including a simple coloring problem that has not yet been solved.
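At its core, a concurrent-code encoder places indelible marks at locations derived from every prefix of a message, and the decoder walks the prefix tree, keeping only prefixes whose marks are present; a jammer can add marks but cannot erase them. The sketch below is a simplified reconstruction with assumed parameters (hash function, mark-space size, and no checksum bits), not the fielded BBC implementation.

```python
import hashlib

N_LOCATIONS = 1 << 16  # size of the mark space, e.g. time slots in a pulse scheme

def loc(prefix_bits):
    # Hash a bit-string prefix to one of N_LOCATIONS mark positions.
    digest = hashlib.sha256(prefix_bits.encode()).digest()
    return int.from_bytes(digest[:4], "big") % N_LOCATIONS

def encode(message_bits, marks):
    """Place an indelible mark at the location of every prefix of the message."""
    for i in range(1, len(message_bits) + 1):
        marks.add(loc(message_bits[:i]))

def decode(marks, msg_len):
    """Depth-first reconstruction: extend every prefix whose mark is present."""
    found, stack = [], [""]
    while stack:
        prefix = stack.pop()
        if len(prefix) == msg_len:
            found.append(prefix)
            continue
        for bit in "01":
            candidate = prefix + bit
            if loc(candidate) in marks:
                stack.append(candidate)
    return found

channel = set()                # marks are indelible: a jammer can add but not erase
encode("1011001110", channel)
encode("0100110001", channel)  # a second message sent concurrently
print(decode(channel, 10))     # both messages decode (order may vary)
```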
Dr. Chris GauthierDickey
February 24, 2012, JGH 102, 2:00 - 3:00 PM
Department of Computer Science, University of Denver
http://www.cs.du.edu/~chrisg
How to play peer-to-peer trading card games while preventing your opponents from cheating?
Author Bio:
Dr. Chris GauthierDickey is an Assistant Professor in the Department of Computer Science at the University of Denver. He received his Ph.D. from the University of Oregon, where he was the recipient of a National Science Foundation Graduate Research Fellowship for research on scalable and cheat-proof peer-to-peer protocols for large-scale interactive applications. In other words, his research centers on preventing cheating while providing network and systems support for multiplayer games. He serves as a program committee member of ACM NetGames, the premier venue for research in this area, and of IEEE Local Computer Networks, which is in its 37th year. In addition to traditional computer science research in games, he works on the development of humane games to help in the rehabilitation of people who have had traumatic brain injuries and of patients who have undergone total knee replacements.
Abstract:
In trading card games (TCGs), players create a deck of cards from a subset of all cards in the game to compete with other players. While online TCGs currently exist, they typically rely on a client/server architecture and require clients to be connected to the server at all times. Instead, we propose, analyze and evaluate Match+Guardian (M+G), our secure peer-to-peer protocol for implementing online trading card games. We break down actions common to all TCGs and explain how they can be executed between two players without a third-party referee (which usually requires an unbiased server). For each action, the player is either prevented from cheating, or, if they do cheat, the opponent will be able to prove it in a timely manner. We show that these methods are secure and may be shuffled into new styles of TCGs. We then measure moves in a real trading card game to compare with our implementation of M+G, and conclude with an evaluation of its performance on the Android platform, demonstrating that M+G can be used in a peer-to-peer fashion on mobile devices.
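Protocols of this kind are typically built on cryptographic commitments: a player binds herself to a hidden value up front and reveals it later, so any swap becomes provable. The sketch below shows a plain hash commitment with invented card names; M+G's actual constructions, for example for deck shuffling, are described in the authors' paper.

```python
import hashlib
import secrets

def commit(value):
    """Commit to a hidden value; the random nonce prevents brute-force guessing."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{nonce}:{value}".encode()).hexdigest()
    return digest, nonce

def verify(digest, nonce, value):
    return hashlib.sha256(f"{nonce}:{value}".encode()).hexdigest() == digest

# Alice commits to her face-down card; Bob records only the digest.
digest, nonce = commit("Lightning Dragon")   # card name is invented
# Later Alice reveals. If she swapped cards, verification fails, giving
# Bob a transferable proof of cheating.
print(verify(digest, nonce, "Lightning Dragon"))  # True
print(verify(digest, nonce, "Meek Goblin"))       # False
```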
Dr. Michael Bowling
March 9, 2012, JGH 102, 2:00 - 3:00 PM
Department of Computing Science, University of Alberta, Canada
http://cs.ualberta.ca/~bowling
AI After Dark: Computers Playing Poker
Author Bio:
Dr. Michael Bowling is an Associate Professor of Computing Science at the University of Alberta. His research focuses on machine learning, games, and robotics, and he is particularly fascinated by the problem of how computers can learn to play games through experience. Michael is the leader of the Computer Poker Research Group, which has built some of the strongest poker playing programs in the world. In 2008, one of these programs, Polaris, defeated a team of top professional poker players in two-player, limit Texas Hold'em, becoming the first program to defeat poker pros in a meaningful competition. His research has been featured on the television programs Scientific American Frontiers, National Geographic Today, and Discovery Channel Canada; on radio programs for CBC, BBC, and NPR; in print articles in the New York Times and Wired, as well as newspapers around the world; and twice in exhibits at the Smithsonian Museums in Washington, D.C.
Abstract:
The game of poker presents a serious challenge for artificial intelligence research. The game involves many sources of uncertainty, including unknown opponent cards, unknown future events, and unknown opponent strategies. This talk will outline both the challenges and the state of the art in overcoming them, identifying the key advances that led to Polaris, a program developed at the University of Alberta that became the first computer poker program to defeat professional players in a meaningful competition.
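One key advance behind the Alberta group's programs is counterfactual regret minimization, whose core is the regret-matching update sketched below. Rock-paper-scissors stands in here for poker's vastly larger game tree; in self-play, the average strategy converges to the game's equilibrium.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    # Row player's payoff in rock-paper-scissors.
    return [[0, -1, 1], [1, 0, -1], [-1, 1, 0]][a][b]

def strategy(regret):
    # Regret matching: play actions in proportion to their positive regret.
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

regret = [0.0] * ACTIONS
strategy_sum = [0.0] * ACTIONS
for _ in range(100_000):
    strat = strategy(regret)
    for a in range(ACTIONS):
        strategy_sum[a] += strat[a]
    my = random.choices(range(ACTIONS), weights=strat)[0]
    opp = random.choices(range(ACTIONS), weights=strat)[0]  # self-play opponent
    for a in range(ACTIONS):
        regret[a] += payoff(a, opp) - payoff(my, opp)

# The average strategy approaches the equilibrium (1/3, 1/3, 1/3).
print([round(s / sum(strategy_sum), 3) for s in strategy_sum])
```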
Dr. Lee White
April 6, 2012, JGH 102, 3:00 - 4:00 PM
Professor of Computer Science Emeritus, Case Western Reserve University, Cleveland, Ohio
lwhite4939@aol.com
Firewall Models for Testing Changes in Configurable Software Systems
Author Bio:
Dr. Lee White received his BSEE degree from the University of Cincinnati in 1962, and the MSc and PhD in Computer Engineering from the University of Michigan in 1963 and 1967, respectively. He served as faculty and Chair in Computer Science departments at the Ohio State University, the University of Alberta and Case Western Reserve University, and is presently an Emeritus Professor at CWRU. His primary research interest is software testing. He served as American Editor of the Journal of Software Testing, Verification and Reliability (STVR) from 1994 to 2006. He has published papers in regression testing, GUI testing and, most recently, testing of real-time systems. He has advised 20 PhD students to completion, and has consulted for a number of industrial companies, including IBM, US Steel, Parker-Hannifin, General Electric, Monsanto, Allen-Bradley, Lockheed and ABB.
Abstract:
User Configurable Software Systems (UCSS) consist of very large general-purpose software that can be specialized or configured for many different applications. This is accomplished by selecting a small number of configurable software elements (or modules) together with thousands of settings (parameters) that define the configuration. Two very important and large examples of UCSS are industrial control systems and Enterprise Resource Planning (ERP) systems such as SAP. This presentation deals with the problem in which the user of a configurable software system makes changes to the configuration, and errors in the software may then cause the system to fail under the new configuration. The huge number of uses and possible configurations of the software prevents complete testing. A new method will be described that uses software firewalls for regression testing to determine the regression impact on the system due to changes in the user configuration and settings. These firewalls will be described, along with a demonstration that only a relatively small number of firewalls is needed. Empirical studies will be presented showing both effectiveness and efficiency in detecting software errors when used on real industrial software.
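In firewall-based regression testing, the changed elements plus their direct dependents form the "firewall", and only tests exercising modules inside it need to be rerun. The sketch below is a bare-bones illustration with an invented dependency graph; the method presented in the talk extends the idea to configuration settings in UCSS.

```python
def firewall(dep_graph, changed):
    """Return the test firewall: changed elements plus their direct dependents.

    dep_graph maps each module to the configuration elements or modules
    it depends on.
    """
    inside = set(changed)
    for module, deps in dep_graph.items():
        if any(d in changed for d in deps):
            inside.add(module)
    return inside

# Invented example: three modules over two configuration parameter groups.
deps = {
    "boiler_ctrl": ["pid_params"],
    "alarm_mgr":   ["pid_params", "thresholds"],
    "reporting":   ["alarm_mgr"],
}
print(firewall(deps, {"pid_params"}))
# {'pid_params', 'boiler_ctrl', 'alarm_mgr'}: regression tests for 'reporting',
# which lies outside the firewall, can be skipped.
```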
CRISP Workshop on Information Security and Privacy
April 20, 2012 (Friday), Ritchie Center, 8:30 AM - 4:00 PM
Department of Computer Science, University of Denver
Dr. Deborah Glueck
April 27, 2012, JGH 102, 3:00 - 4:00 PM
Department of Biostatistics, Colorado School of Public Health, University of Colorado Denver
www.ucdenver.edu
Using the Java Web Services Architecture to Select Sample Size for Biomedical Studies
Author Bio:
Dr. Deb Glueck is an Associate Professor of Biostatistics at the Colorado School of Public Health, University of Colorado Denver. With Dr. Keith Muller at the University of Florida, she directs an interdisciplinary team of computer scientists, biostatisticians, and technical writers working on statistical methods and software development. The project is funded by a grant from the National Institute of Dental and Craniofacial Research (1 R01 DE020832-01A1).
Abstract:
A critical component of biomedical research is choosing the number of participants for clinical trials or observational studies. Several authors describe mathematical techniques for selecting an appropriate sample size, but translating those statistical methods into user-friendly software is necessary to make them available to researchers. To address the need for free, usable software, our collaborative team of computer scientists and researchers has developed GLIMMPSE, a free, open-source, web-based software package. The software utilizes the Java Web Services architecture to enhance the modularity and scalability of the system. We describe the current software version, plans for expansion to mobile platforms, and our approach to software quality. We introduce a variety of unsolved problems that highlight the need for continued collaboration with computer scientists.
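To make the underlying computation concrete, here is the textbook normal-approximation sample-size formula for comparing two group means. GLIMMPSE covers far more general linear-model designs, so treat this as an illustrative special case rather than the package's method.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size for comparing two group means.

    effect_size is Cohen's d (mean difference divided by the common SD).
    """
    z = NormalDist().inv_cdf
    needed = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2
    return math.ceil(needed)

# A medium effect (d = 0.5) at the usual alpha and power levels:
print(n_per_group(0.5))  # about 63 participants per group
```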
Undergraduate Research
May 4, 2012, JGH 102, 3:00 - 4:00 PM
Department of Computer Science, University of Denver
Dr. Michael E. Goss
May 18, 2012, JGH 102, 3:00 - 4:00 PM
Geo Engineering, Google, Boulder, CO
mikegoss@google.com
A Computer Scientist's Introduction to Geographical Information
Author Bio:
Dr. Michael E. Goss has been a Software Engineer at Google since 2005. He works on putting 3D buildings into Google Maps and Google Earth, and has also worked on geo-modeling features in Google SketchUp. Previous Geo-related projects include research as a Computer Science faculty member at Colorado State University, development of mapping and flight simulation software at a small start-up company, Merit Technology, and several years at E-Systems. Mike also spent ten years as a researcher at HP Labs. Mike has a Ph.D. and M.S. in computer science from the University of Texas at Dallas, and a B.S. in computer science from Michigan State University. Before he found out about Google Earth, Mike was known to sometimes spend long periods of time browsing the National Geographic Atlas just for fun.
Abstract:
Want to know why your GPS receiver doesn't show longitude 0 when you're standing on the Greenwich Meridian? Curious why "north" on your UTM grid doesn't actually point north? Wondered what it means when the data you received says it's relative to "NAD27" or "OSGB36"? Not sure what data structures to use to organize geographic data? It's all important to know if you're developing software and algorithms that use geographic data. Come to this talk to find out the answers to these and other perplexing questions.
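As a taste of one question the talk answers, the sketch below approximates UTM grid convergence (the angle between grid north and true north) with the standard first-order formula. The zone arithmetic follows the usual convention, but treat the code as an illustrative approximation rather than surveying-grade.

```python
import math

def utm_zone(lon):
    # UTM zones are 6 degrees wide, numbered 1..60 starting at 180 W.
    return int((lon + 180) // 6) + 1

def grid_convergence_deg(lat, lon):
    """Approximate angle between UTM grid north and true north, in degrees.

    Uses the first-order formula gamma = (lon - central_meridian) * sin(lat);
    positive values mean grid north points east of true north.
    """
    central_meridian = utm_zone(lon) * 6 - 183
    return (lon - central_meridian) * math.sin(math.radians(lat))

# At Greenwich (zone 31, central meridian 3 E), grid north sits about
# 2.3 degrees west of true north, which is why "north" on a UTM grid
# is not quite north.
print(round(grid_convergence_deg(51.4779, 0.0), 2))  # -2.35
```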