Incisive analysis

Definition
Incisive analysis (IA) focuses on maximizing timely insight from the massive, disparate, unreliable, and dynamic data that are, or could be, available to analysts. IA pursues new sources of information from existing and novel data and develops innovative techniques that can be applied throughout the process of analysis. IA programs span diverse technical disciplines but share common features: (a) they create technologies that can earn analysts' trust by providing the reasoning behind results; and (b) they address data uncertainty and provenance explicitly.

The following topics (in no particular order) are of interest to IA:
 * Methods for developing understanding of how knowledge and ideas are transmitted and change within groups, organizations, and cultures;
 * Multidisciplinary approaches to assessing linguistic data sets;
 * Methods for measuring and improving human judgment and human reasoning;
 * Methods for extraction and representation of the information in the non-textual contents of documents, including figures, diagrams, and tables;
 * Methods for understanding and managing massive, dynamic data;
 * Analysis of massive, unreliable, and diverse data;
 * Methods for assessing the relevance and reliability of new data;
 * Methods for understanding the process of analysis and potential impacts of technology;
 * Multidisciplinary approaches to processing noisy audio and speech;
 * Development of novel top-down models of visual perception and visual cognition;
 * Methods for analysis of significant societal events;
 * Methods for estimation and communication of uncertainty and risk;
 * Novel approaches for mobile augmented reality applied to analysis and collection;
 * Methods for topological data analysis and inferences of high-dimensional structures from low-dimensional representations;
 * Methods for the study of algorithms stated in terms of geometry (computational geometry);
 * Methods for geolocation of text and social media;
 * Novel approaches to biosurveillance;
 * Methods to make machine learning more useful and automatic;
 * Methods to construct and evaluate speech recognition systems in languages without a formalized orthography; and,
 * Methods and approaches to quantifiable representations of uncertainty simultaneously accounting for multiple types of uncertainty.
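The last topic calls for quantifiable representations of uncertainty that account for multiple types of uncertainty at once. As a minimal illustrative sketch (not a method from any IA program), one common decomposition separates epistemic uncertainty, the disagreement among an ensemble of models, from aleatoric uncertainty, the noise each model reports in its own estimate, and keeps the two components distinct rather than collapsing them into a single number:

```python
import statistics

# Hypothetical example: the model outputs and noise variances below are
# invented for illustration, not drawn from any real analysis system.
predictions = [0.62, 0.70, 0.66, 0.58]      # ensemble of model estimates
noise_vars  = [0.010, 0.012, 0.009, 0.015]  # each model's reported noise variance

# Epistemic uncertainty: sample variance across the ensemble (disagreement).
epistemic = statistics.variance(predictions)

# Aleatoric uncertainty: average of the models' own noise variances.
aleatoric = statistics.mean(noise_vars)

# A quantifiable representation that preserves both components,
# so a consumer can tell reducible from irreducible uncertainty.
estimate = {
    "mean": statistics.mean(predictions),
    "epistemic_var": epistemic,
    "aleatoric_var": aleatoric,
    "total_var": epistemic + aleatoric,
}
print(estimate)
```

Keeping `epistemic_var` and `aleatoric_var` separate matters for communication: epistemic uncertainty can shrink with more data or better models, while aleatoric uncertainty cannot.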