Dr. Fredric M. Ham

Dr. Fredric M. Ham, IEEE Life Fellow, SPIE Fellow, and INNS Fellow, is the Director of Science and Technology at Tricorp Business Solutions and Professor Emeritus of Electrical Engineering at the Florida Institute of Technology in Melbourne. He received his BS, MS, and PhD degrees in Electrical Engineering from Iowa State University in 1976, 1979, and 1980, respectively, and has over 35 years of professional engineering experience. From 1977 to 1978 he worked for Shell Oil Company as a geophysicist. From 1980 to 1988 he was a Staff Engineer at Harris Corporation in Melbourne, Florida, where he worked in the Systems Analysis Group (among many other tasks, he performed all of the error analysis for the fine guidance control of the mirrors on the Hubble Space Telescope for the spiral-scan and orthogonal search modes) and in the Large Space Structures Controls Group (where he developed robust control algorithms for flexible space structures). He was at the Florida Institute of Technology from 1988 to 2014, where he was the Harris Professor, Dean of the College of Engineering, and Vice President for Research. While at Florida Tech, in his Information Processing Lab (IPL), he developed methods for non-invasive glucose monitoring for diabetics (one licensed patent), robust neural-based classification methods for tactical and strategic infrasound applications and speaker recognition, the P3CBT software package (a versatile event classification package available for licensing), and proactive predictive network security methods for MANETs, to name a few. He has published over 100 technical papers, holds three U.S. patents, and is the author of the textbook Principles of Neurocomputing for Science and Engineering, McGraw-Hill, 2001. Dr. Ham’s current research interests include neural networks, deep learning, artificial intelligence, adaptive signal processing, biosensor development, speech recognition, pattern recognition, wireless network security, tactical infrasound, and the development of neural-based classification methods using infrasound for monitoring nuclear explosions in support of the Comprehensive Nuclear-Test-Ban Treaty. Dr. Ham is a Past President of the International Neural Network Society (INNS) (2007-2008), served on the INNS Board of Governors (2009-2011), and is a member of Tau Beta Pi, Phi Kappa Phi, Eta Kappa Nu, and Sigma Xi.

Title Of The Talk: Artificial Intelligence – Where We Are, Where We’re Going, and the Concerns

Abstract: The area of Artificial Intelligence (AI) has never been more popular than it is today. With great strides made in neural network learning, specifically deep learning, advances in AI have been on the rise. Deep learning is so popular today mainly because these methods can classify images more accurately than humans, but also because high-performance GPUs allow fast training of deep networks and because the very large amounts of labeled data (Big Data) needed to train them are now far more accessible. The main advantage of deep learning over standard machine learning methods is that deep learning can learn features and tasks directly from the labeled data. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs, networks with cycles) are two popular architectures for deep neural network learning. CNNs are best suited for image data and excel at object recognition and detection, object classification, and scene recognition. RNNs are very good for natural language processing (NLP), which involves the interpretation (and generation) of natural human language and can establish a direct means for humans to interact with machines.
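To make concrete what “learning features and tasks directly from the labeled data” looks like in practice, below is a minimal sketch of a small convolutional network for image classification written with the Keras API. The layer sizes, input shape, ten output classes, and the random placeholder data are illustrative assumptions only, not the models discussed in the talk.

```python
# A minimal image-classification CNN of the kind described above.
# Layer sizes, input shape, and the 10-class output are illustrative
# assumptions for a small example, not the speaker's actual models.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),          # small grayscale images
    layers.Conv2D(16, 3, activation="relu"),  # learn local image features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),   # 10 object classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on labeled data (random placeholders here); the network learns
# its features directly from the labeled images, as the abstract notes.
x_train = np.random.rand(128, 28, 28, 1).astype("float32")
y_train = np.random.randint(0, 10, size=128)
model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=0)
```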
These new developments in AI systems have begun to further advance automation and robotics. In the case of industrial robots, a new generation of automation is already here. Earlier robots were programmed to carry out one very specific task and could not deviate from it, because the robot knew how to do only that one thing. Now, however, there are robotic systems that can perform many different tasks. They can do this because they have been designed to think on their own; that is, these robots learn. They can learn by trial and error through a reinforcement learning process, and some robots can learn a task simply by observing humans perform it. Compared with earlier robots, today’s robots therefore have a degree of intelligence. And it is not just robotic systems that possess intelligence: there are self-driving cars and AI systems that can diagnose diseases better than medical doctors, beat humans at their own games, compose music, and tirelessly perform legal discovery in law offices, to name a few. Beyond these successes in AI systems and intelligent robots, there is now a desire for these machines to be able to sense and react to human emotions. There has been some progress in this area, and further advances appear to be on the horizon.
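As an illustration of the trial-and-error reinforcement learning mentioned above, here is a minimal tabular Q-learning sketch. The five-state corridor task, reward values, and learning parameters are assumptions chosen only to keep the example small; real robotic learning uses far richer state and action spaces.

```python
# Minimal tabular Q-learning: an agent learns a task by trial and error.
# The tiny 5-state corridor task and reward values are illustrative
# assumptions, not a real robotic control problem.
import random

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Move left or right; reward 1 only when the goal (state 4) is reached."""
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for _ in range(500):                          # episodes of trial and error
    state = 0
    while state != n_states - 1:
        if random.random() < epsilon:         # explore
            action = random.randrange(n_actions)
        else:                                 # exploit current knowledge
            action = Q[state].index(max(Q[state]))
        next_state, reward = step(state, action)
        # Standard Q-learning update rule
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state])
                                     - Q[state][action])
        state = next_state

print(Q)  # learned action values; "go right" comes to dominate in every state
```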
However, there is currently concern about how far we should go with an AI system’s ability to emulate human intelligence. There is uneasiness about having robots and AI systems that can reason, think rationally, act rationally, possess consciousness, and even have free will. It is the free will that is probably most worrisome, as we do not want a singularity to occur; that is, AI systems and intelligent robots that transcend the intelligence of humans, take over all our jobs, and, more importantly, leave humanity in a situation where we are no longer able to control our own creations. So it is our responsibility, as stewards of our society, to ensure this does not occur and to merge with these AI systems to make certain they act to augment what we do in our everyday lives.

Dr. Joseph Finkelstein

Dr. Joseph Finkelstein is the Director of the Center for Bioinformatics and Data Analytics and an Associate Professor at Columbia University Medical Center. After obtaining his medical degree, Dr. Finkelstein completed a PhD program in Biomedical Cybernetics followed by a post-doctoral fellowship in Biomedical Informatics at Columbia University. Dr. Finkelstein is an expert in the development, evaluation, and implementation of innovative patient-centered information technologies supporting personalized care. He is the author of over 100 peer-reviewed publications and several patents. Dr. Finkelstein has been a recipient of multiple grants from the NIH, AHRQ, and DOD. He served as director of the biomedical informatics program at Johns Hopkins University for 8 years. Dr. Finkelstein’s research is focused on predictive analytics for precision medicine. He develops clinical decision support tools supporting tailored patient engagement and empowerment, personalized care coordination, and individualized medication management based on pharmacogenetic testing. The Center for Bioinformatics and Data Analytics, led by Joseph Finkelstein, MD, PhD, develops, evaluates, and implements innovative technologies supporting the delivery of personalized dental care in the context of a learning healthcare system.

Title Of The Talk: Big Data, Internet of Things, and Dentistry: What do They Have in Common?

Abstract: A learning healthcare system (LHS) is a care delivery environment supporting a continuous “learning cycle” in which the lessons from research and from each care experience are systematically captured, assessed, and translated into reliable care. The Institute of Medicine posited the LHS as a powerful means to facilitate increased safety, effectiveness, and higher quality at lower cost. Extensive use of data science is a crucial component of an LHS, as it facilitates the generation and application of the best evidence for optimal healthcare choices tailored to each patient and provider. Electronic Health Records (EHR) represent a crucial component of the LHS “learning cycle” because they provide an ongoing means for systematic data collection embedded into routine clinical care delivery. The widespread adoption of EHR systems offers a powerful resource for implementing a knowledge discovery pipeline supporting the “learning cycle” of continuous quality improvement, clinical research, and deep analytics. Major changes have been taking place in dental care delivery through the expansive and ongoing digitization of dental services. Dental practitioners and researchers are increasingly amassing large and varied data sets across multiple dental subspecialties.
In addition to the electronic dental record (EDR), the most widely mentioned sources of digital dentistry include CAD/CAM and intraoral imaging, quantitative light-induced fluorescence for caries detection, computer-aided implant dentistry, intraoral and extraoral digital radiography, and stereolithographic and fused deposition modeling. Together with oral microbiome, metabolomic, genetic, and immune biomarkers, these data sets provide a very rich data resource at the molecular, genetic, cell, organ, and system levels, one that requires systematic application of data science to fully support a continuous knowledge discovery cycle. Combining these data sets with information from electronic medical records (EMR), billing information, characteristics of providers, healthcare organizations, and environmental data could significantly facilitate evidence-based dental care delivery, generate new disease prevention and treatment strategies, and greatly advance the practice of precision dentistry. Successful applications of data analytics in precision oral health will be exemplified using diverse data streams ranging from large population-based data sets to RFID tagging data generated during routine dental care delivery.

Dr. Lanier Watkins

Lanier Watkins is a Lawrence R. Hafstad Fellow and holds dual appointments at the Johns Hopkins University Information Security Institute and the Johns Hopkins University Applied Physics Laboratory (JHU/APL). Prior to joining APL, he worked for over 10 years in industry, first at the Ford Motor Company and later at AT&T, where he held roles such as systems engineer, network engineer, product development manager, and product manager. Dr. Watkins’ research presently encompasses the areas of critical infrastructure and network security. He holds a Ph.D. in Computer Science from Georgia State University, where he was advised by Dr. Raheem Beyah; three M.S. degrees, in Biotechnology (Johns Hopkins University) and in Computer Science and Physics (both from Clark Atlanta University); and a B.S. degree in Physics (Clark Atlanta University). He has served on the technical program committees of, or been an invited speaker at, several conferences and serves as a referee for multiple IEEE journals. His areas of research interest include computer network security, IoT security, vulnerability monitoring and analysis, and big data and analytics.

Title Of The Talk: An Analysis of the Vulnerability of Hobby and Commercial Unmanned Aerial Systems (UAS)

Abstract: Unmanned aerial systems (UAS) are becoming ubiquitous in government and commercial applications, among them precision agriculture, infrastructure inspection, mining, disaster response, and film making, just to name a few. By 2025, the UAS market is predicted to grow into an $84 billion industry. This puts the top UAS vendors (DJI, Parrot, and 3DR) at the forefront of innovation and product development for this industry; however, since these vendors have not considered the impact of cyber security on their products, this multi-billion-dollar industry is at risk. In this presentation, I will (1) demonstrate that the products of these top vendors are vulnerable to the simplest of cyber-attacks and (2) propose a high-level multi-layer security framework to protect against these basic attacks. Since this security problem (i.e., the UAS security problem) is complicated by the need to secure multiple products produced by multiple vendors, I will simplify the task by focusing mostly on the products of Parrot for this talk.

Dr. Robert Mitchell

Robert Mitchell is currently a member of the technical staff at Sandia National Laboratories. He received his Ph.D., M.S., and B.S. from Virginia Tech. Robert served as a military officer for six years and has over 12 years of industry experience, having worked previously at Boeing, BAE Systems, Raytheon, and Nokia. His research interests include game theory, linkography, moving target defense, computer network operations, network security, intrusion detection, and cyber-physical systems. Robert has published 22 peer-reviewed articles.

Title Of The Talk: A Game Theoretic Model of Computer Network Exploitation Campaigns

Abstract: Increasingly, cyberspace is the battlefield of choice for twenty-first-century criminal activity and foreign conflict. The Ukraine power grid attack of December 2015, the April 2016 United States Democratic National Committee (DNC) hack, the Banner Health spill of June 2016, the October 2016 Dyn Domain Name System (DNS) Distributed Denial of Service (DDoS) attack, the San Francisco Municipal Transportation Agency (MTA) ransom of November 2016, and the May 2017 WannaCry pandemic exemplify this trend’s high impact. Management and incident handlers require decision support tools to forecast the impacts of their investment decisions and technical responses, respectively.
Mathematical models (e.g., closed-form equations) and stochastic models (e.g., Petri nets) provide rigorous insight based on first principles. However, these models fall short when it comes to the human element of cyber warfare. Simulations and emulations suffer from the same flaw; furthermore, they are not viable at Internet of Things (IoT) scale. Despite these shortcomings, leadership and incident responders still require situational awareness.
Game theoretic models consider the collaborative and competitive interaction between rational players striving to achieve their own best possible outcomes. This technique has yielded great results in the field of economics where eleven game theorists have won the Nobel prize. We propose a game theoretic model based on a six phase model of computer network exploitation (CNE) campaigns comprising reconnaissance, tooling, implant, lateral movement, exfiltration and cleanup stages. In each round of the game, the attacker chooses whether or not to continue the attack, nature decides whether the defender is cognizant of the campaign’s progression, and the defender chooses to respond in an active or passive fashion, if applicable. Decision makers can use this game theoretic model to implement defensive measures, and information security practitioners can develop tactics, techniques and procedures (TTPs) that render attacks inviable.
First, we proposed an extensive-form game: this game is non-cooperative, asymmetric, non-zero-sum, and sequential; the current iteration of the game uses only discrete strategies and assumes perfect information. Next, we converted the extensive-form game into normal form to facilitate analysis. Third, we identified the payoff functions for the attacker and the defender for every pure strategy set in the game. Attacker-related parameters include the cost of executing each attack phase and the value of the target data. Defender-related parameters include the cost of responding to each attack phase, the cost of spilling the target data, and the value of the threat intelligence harvested from each stage of the attack. Nature-related parameters include the probability of detecting each attack phase. Finally, we implemented an algorithm to find the pure-strategy Nash equilibria given any parameterization of the game.
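As a concrete illustration of that final step, the sketch below enumerates the pure-strategy profiles of a small two-player normal-form game and keeps those from which neither player can profitably deviate unilaterally. The 2x2 payoff matrix is an arbitrary placeholder, not the attacker/defender parameterization used in the talk.

```python
# Minimal pure-strategy Nash equilibrium search for a two-player
# normal-form game, as in the final step described above.  The payoff
# values below are arbitrary placeholders, not the attacker/defender
# parameterization from the talk.
import itertools

# payoff[a][d] = (attacker payoff, defender payoff)
# attacker strategies: 0 = continue attack, 1 = abort
# defender strategies: 0 = passive response, 1 = active response
payoff = [
    [(2, -1), (-1, 0)],
    [(1,  0), (-2, 1)],
]

def pure_nash_equilibria(payoff):
    n_att, n_def = len(payoff), len(payoff[0])
    equilibria = []
    for a, d in itertools.product(range(n_att), range(n_def)):
        att_pay, def_pay = payoff[a][d]
        # Attacker cannot gain by unilaterally switching strategy
        att_best = all(payoff[a2][d][0] <= att_pay for a2 in range(n_att))
        # Defender cannot gain by unilaterally switching strategy
        def_best = all(payoff[a][d2][1] <= def_pay for d2 in range(n_def))
        if att_best and def_best:
            equilibria.append((a, d))
    return equilibria

print(pure_nash_equilibria(payoff))
```

For these placeholder numbers, the search returns a single equilibrium: the profile in which the attacker continues and the defender responds actively.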

Prof. Eric Balster

Prof. Eric Balster is the Director of the Computer Engineering Graduate Program at the University of Dayton. He received his Ph.D. from The Ohio State University. His research interests include signal and image processing and compression, video processing and compression, embedded systems, and digital design. He is a Senior Member of the IEEE and has published several papers in reputed journals. Prof. Balster received the Wohlleben-Hochwalt Outstanding Professional Research Award in 2011 and the Sensors Directorate Dr. James Tsui Award in 2010.

Title Of The Talk: Persistent Surveillance Processing Utilizing FPGAs

Abstract: Over the past decade or so, the relatively low cost and high pixel densities of digital imaging technology have created a new type of surveillance capability called Wide Area Persistent Surveillance. In general, persistent surveillance systems have a much larger data processing requirement than more traditional surveillance technology. Furthermore, this additional data processing requirement comes with no further resource availability, such as additional size, weight, or power (SWaP), to handle the increased computational load.
Over roughly the same time period, field-programmable gate array (FPGA) technology has increased clock rates by a factor of 40, become 300 times denser, and become 10 times less expensive. Additionally, FPGAs have added a whole host of new features that allow high-speed, low-power processing solutions to become a reality.
This keynote address will discuss some uses of FPGA technology to enable and enhance the capability of Wide Area Persistent Surveillance Systems.

Prof. James B Cole

James Cole is currently a professor at the University of Tsukuba in Japan. He received his PhD in physics (high energy and particle physics) from the University of Maryland, and after a post-doctorate at the NASA Goddard Space Flight Center (Laboratory for High Energy Astrophysics) he went to the Army Research Laboratory (ARL) and then the Naval Research Laboratory (NRL). At ARL he developed simulated annealing programs for pattern recognition, and at NRL he began his current research, writing parallel computer programs with advanced visualizations to model sound propagation in complicated ocean environments. He developed the first high-accuracy nonstandard finite-difference time-domain (NS-FDTD) algorithms. In 1995, he became a professor at the University of Tsukuba in Japan. His main focus is to develop high-precision algorithms with good numerical stability on coarse numerical grids that are nevertheless simple enough to run on small computers.

Title Of The Talk: The Philosophy of Models: What They Can and Cannot Tell Us

Abstract: Most real-world systems are too complicated to comprehend, so we have no choice but to resort to simplified models. A good model captures the essential features of the system we wish to understand while excluding the “clutter” of unimportant features. It is rather amazing – even miraculous – that a good model, simple though it may be, can predict as yet undiscovered features of the real system.
In this lecture we give several examples of simple models that explain more than we might have expected, ranging from economics to computational electromagnetics and photonics to fundamental physics.

Prof. Lutz Sparowitz

Dr. Sparowitz (born in 1940 in Graz, Austria) received his Ph.D. degree in Engineering Sciences from the Technical University of Graz in 1974. Thereafter, he worked as an engineering consultant at a local firm until 1988, when he joined the University of Life Sciences in Vienna, Austria, as a full professor and director of the Institute for Structural Engineering. In 1993 he moved to the Technical University of Graz as a full professor and director of the Institute for Concrete Construction, a position he held until the end of 2009, when he became an emeritus professor. In 2003 he co-founded the S&W (Sparowitz & Wörle) Engineering Consulting Ltd in Graz with his partner, Dr. Pius Wörle, and continues to manage the company to this day.
Since 1998, Prof. Sparowitz’s research has focused on UHPC (Ultra High Performance Concrete); he has worked not only on the development of the material technology but also on the design of various structures made of UHPC in several practical projects, work that culminated in numerous significant Austrian awards: the 2007 Consulting Engineers Award for a UHPC bridge built in Austria, the 2008 Austrian State Award for the best innovative project of the year, and the 2009 Dr. Wolfgang Houska Award (with a prize of €100,000) for the best research activity of the year. Since 2010, he has conducted research in cooperation with the university to develop an advanced mobility system, called QUICKWAY, which offers a viable solution to the critical traffic problem of large smart and green cities. In recent years, his research has investigated the applicability of UHPC for Hyperloop tubes.

Title Of The Talk: QUICKWAY – An Intelligent City-Highway Network

Abstract: In the smart and green city of the future, special importance has to be given to unlimited mobility for all citizens, including elderly and disabled persons. Because of this, urban planners first and foremost have to face the traffic problem.
The QUICKWAY system is the definitive solution to that problem: the major traffic runs channeled and concentrated over a city-highway network called QUICKNET. That motorway net consists of safe, elevated carriageways, which always cross each other on different levels (grade-separated crossings).
The size of a highway mesh is preferably 640 x 800 meters. The mesh areas surrounded by the highways are traffic-calmed zones. People live and work there, and only originating and terminating traffic is permitted, in order to enable the desired comfortable “door-to-door service”.
The QUICKWAY Navigation System (QNS) is the heart of the QUICKWAY system. The QNS software processes, in real time, the current positions and destinations of all vehicles in operation in order to optimize the traffic flow, and it guides each vehicle along its ideal route. Driverless vehicles are steered directly by QNS along their routes; driver-controlled vehicles are directed by voice navigation, as is usual today.
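To give a feel for the kind of computation such a navigation system performs, the sketch below runs a shortest-path search over a small grid-shaped network, with edge weights standing in for current travel times. The 4x4 grid and the uniform weights are illustrative assumptions; this is not the actual QUICKNET topology or the QNS software.

```python
# Minimal sketch of the route computation a navigation system such as QNS
# performs: Dijkstra shortest-path search over a grid-shaped highway network.
# The 4x4 grid and the travel-time weights are illustrative assumptions only.
import heapq

def grid_network(rows, cols, travel_time):
    """Build an undirected grid graph; travel_time(u, v) gives the edge cost."""
    graph = {}
    for r in range(rows):
        for c in range(cols):
            node = (r, c)
            graph[node] = []
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    graph[node].append(((nr, nc), travel_time(node, (nr, nc))))
    return graph

def ideal_route(graph, start, goal):
    """Dijkstra: return the minimum-travel-time route from start to goal."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph[node]:
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Uniform travel times here; a real system would update them in real time
# from the positions and destinations of all vehicles in operation.
net = grid_network(4, 4, lambda u, v: 1.0)
print(ideal_route(net, (0, 0), (3, 3)))
```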
Only autonomous (driverless), electrically powered vehicles (buses, light trucks, vans, taxis, cars, etc.) are led by QNS onto the faster QUICKNET. During heavy traffic in rush hours, vehicles there move in a line of traffic with very short headway distances (~1.0 meter).
Driver-controlled vehicles and heavy goods trucks (total weight > 5 tons) have to operate on level 0, below the elevated QUICKNET, navigated by QNS as well.
Public transport is covered by rather small, agile driverless buses running exclusively on the QUICKNET. They have 16 folding seats and room for up to 40 standing passengers during rush hours. Each mesh contains up to four bus stops at level 0, halfway between two nodes of the mesh. QNS manages the bus traffic too. Passengers order a bus by smartphone. Buses do not drive along fixed routes (as is usual today) and do not stop at each station; they work more or less like large, hailed shared taxis.
The driverless QUICKTAXI is arguably the most important means of transport in the QUICKWAY system, because taxis, company-owned cars, and privately owned cars – all ordered by smartphone – offer the above-mentioned “door-to-door service”. After passengers alight at dedicated stopping lanes, the vehicles move on. Parking places along the streets are replaced by underground car parks.
As a combination of the features mentioned above (and others), the QUICKWAY system increases traffic performance many times over.
This keynote explains how the QUICKWAY traffic system works in principle. The presentation also gives an introduction to how a smart and green lifestyle city with an embedded QUICKWAY traffic system should be designed. Finally, the main requirements for the QUICKWAY navigation software are discussed.

Prof. Robert Steele

Dr. Steele serves as Director of Florida Polytechnic University’s Health Informatics Institute and as a Full Professor. Dr. Steele holds a PhD in Computer Science, has authored over 120 peer-reviewed publications, and his work has been patented and successfully commercialized. He has extensive experience securing competitive external research funding, particularly in partnership with industry. His research interests include mobile information systems, informatics, cybersecurity, machine learning, sensor-based systems, and analytics. He has also served as the Vice Chair of ACM SIGMOBILE.
Professor Steele has a leading international research record and is a national leader in the field of health informatics and analytics. He also has extensive administrative experience in large, complex, and leading research institutions internationally, at all levels of program leadership, curriculum development, and teaching. Prior to commencing at Florida Polytechnic University he was at the Medical University of South Carolina, where he served as Director of the Division of Health Informatics and Full Professor. He has also previously served as Head of Discipline and Chair of Health Informatics at The University of Sydney, has led numerous other programs, and has extensive experience in leading large curriculum development and improvement efforts.

Title Of The Talk: Data Analytics and Learning Systems: The Role of the Internet-of-Things

Abstract: Predictive models enable the prediction of target values for previously unseen input sets. Historically, the volume of digitized data has been significantly lower and the processing power of machines comparatively less. In the past, this limited the domains to which predictive models could be applied and the impact of these models when applied. Similarly, the application of such models to improve the operation of real-world systems in various business and consumer domains, with the goal of having these systems improve or ‘learn’ over time, has faced concomitant limitations.
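As a minimal sketch of the fit-then-predict workflow described above, the example below trains a model on synthetic historical data and then predicts target values for inputs it has not seen. The synthetic data and the choice of a random-forest regressor are assumptions made only for illustration.

```python
# Minimal sketch of a predictive model of the kind described above:
# fit on historical (input, target) pairs, then predict target values
# for previously unseen inputs.  The synthetic data and the choice of a
# random-forest model are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 4))                                  # e.g., four sensor readings
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(0, 0.1, 500)     # target value

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                # learn from historical data

print(model.predict(X_test[:5]))           # predictions for unseen inputs
print(model.score(X_test, y_test))         # R^2 on held-out data
```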
The Internet-of-Things represents a transformative development in the role and capabilities of predictive models, with this current era of research providing the opportunity for great achievements and high-impact contributions.
The Internet-of-Things is not a single system, nor a system emerging from a ‘big bang’ implementation, but rather an incremental and increasing pervading of sensing, computation, and actuation capabilities into the physical world and the entities within it; as such, it represents a significant development in data acquisition, processing, and other capabilities. In this sense, the Internet-of-Things will support the acquisition of new types of datasets with greater ranges of attributes and domains of application. By looking at research in various sectors, the talk will cover the implications of the Internet-of-Things for learning systems: systems that are improved incrementally over time via measurement, feedback, and improvement.