Predictive Analytics - Ariel Partners

Ariel Partners provides a wide range of services that enable organizations to unlock the power of data to yield insights and support smarter business decisions. Predictive analytics is the analysis of past and current data to make predictions about future trends and events. It answers the “what if…” questions, allowing organizations to make informed decisions about their future strategic plans and goals and to react effectively to new developments and emerging threats. Predictive modeling can take short-term, seasonal, annual, and longer-term trends into account to yield results that are much more likely to be accurate. Our consulting team employs big-data analysis techniques including data mining, data classification, statistical analysis, process analysis, linear regression, and machine learning.
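As a minimal illustration of predictive modeling, the sketch below fits a linear regression to a small set of hypothetical monthly sales figures and extrapolates the trend forward. The data and variable names are purely illustrative, not drawn from any client engagement:

```python
import numpy as np

# Hypothetical monthly sales figures with a rising trend (illustrative only).
months = np.arange(12)
sales = np.array([102., 104., 111., 115., 119., 127.,
                  130., 135., 141., 144., 151., 155.])

# Fit a simple linear trend: sales ~ slope * month + intercept.
slope, intercept = np.polyfit(months, sales, 1)

# Extrapolate the trend three months beyond the observed data.
future_months = np.arange(12, 15)
forecast = slope * future_months + intercept
```

In practice the team would layer in the seasonal and long-term components discussed below, but even this simple model shows how past data can be projected into answers about the future.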

Data Lakes

Traditionally, organizations sought to create a centralized Enterprise Data Warehouse that provided a single source of truth and could produce reports combining data from multiple source systems. Data warehouses are important and useful, and it would be an exaggeration to say they are obsolete. However, there are reasons to look beyond data warehouses towards new “data lake” technologies. Unlike data warehouses, a data lake can accommodate unstructured and semi-structured data, such as documents, in addition to structured data. Data lakes can be set up very quickly and can start producing meaningful results in days or weeks, rather than months or years.

Data Maturity

It can be useful to think about an organization’s use of data as progressing towards greater levels of maturity.

At first, the main concern of an organization may be to produce a set of canned tabular reports to satisfy compliance and regulatory concerns. The main issues at this phase have to do with consistency and accuracy. For example, corrections coming in after the fact could lead to differences in reporting at the aggregate level depending on when a given report was run. The fix for this issue is to use a “type 2” slowly changing dimension, so that the report can be run at any time while specifying a given “as of” date to yield 100% temporally consistent results.
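A type 2 slowly changing dimension preserves history by closing out the old row and opening a new one whenever a correction arrives, so any report can be filtered to a given “as of” date. The sketch below shows the idea with pandas; the table, column names, and dates are hypothetical:

```python
import pandas as pd

# Hypothetical type-2 dimension: each correction closes the old row and
# opens a new one, so history is preserved (all values are illustrative).
dim = pd.DataFrame({
    "customer_id":    [1, 1, 2],
    "region":         ["East", "West", "East"],
    "effective_from": pd.to_datetime(["2022-01-01", "2023-03-15", "2022-01-01"]),
    "effective_to":   pd.to_datetime(["2023-03-15", "2099-12-31", "2099-12-31"]),
})

def as_of(dim, date):
    """Return the dimension rows that were current on the given date."""
    d = pd.Timestamp(date)
    return dim[(dim.effective_from <= d) & (d < dim.effective_to)]

# The same report run "as of" a fixed date yields the same rows every time,
# no matter when corrections arrived.
before = as_of(dim, "2023-01-01")   # customer 1 still in East
after = as_of(dim, "2023-06-01")    # customer 1 now in West
```

Because the history rows are never overwritten, rerunning the report months later with the same “as of” date yields 100% temporally consistent results.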

[Figure: choropleth chart]

The next step is to produce dashboards using powerful visualizations such as a choropleth (illustrated above), complete with sophisticated sorting and filtering, as well as trendline charts for tracking changes.

Going beyond this requires machine learning algorithms that extrapolate curves reflecting near-term trends, seasonal patterns, and long-term socio-economic trends. For example, there may be differences that correlate with the school year. In addition, we want to take into account macro trends, such as gentrification and demographic shifts in average age, religion, race, and culture, that may affect the projections.
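One common way to combine a long-term trend with an annual seasonal swing is to fit both at once via least squares. The sketch below does this with a sine/cosine pair for the yearly cycle; the three years of monthly counts are synthetic and purely illustrative:

```python
import numpy as np

# Hypothetical three years of monthly counts: a linear trend plus an
# annual (school-year-like) seasonal swing. All numbers are illustrative.
t = np.arange(36)
y = 50 + 0.8 * t + 10 * np.sin(2 * np.pi * t / 12)

def design(t):
    """Design matrix: intercept, trend, and one annual sine/cosine pair."""
    return np.column_stack([
        np.ones_like(t, dtype=float),
        t,
        np.sin(2 * np.pi * t / 12),
        np.cos(2 * np.pi * t / 12),
    ])

# Fit trend and seasonality together in one least-squares solve.
coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)

# Extrapolate six months past the observed window.
t_future = np.arange(36, 42)
forecast = design(t_future) @ coef
```

A real engagement would add further regressors for the macro trends mentioned above, but the mechanics are the same: each effect becomes a column in the design matrix.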

Ultimately, we can produce interactive simulations: models that dynamically project future trends and enable stakeholders to perform what-if scenarios, exploring the possible effects of modifying various configurations and input parameters. What kinds of things can we do with these models?

  • Explore how different future locations and configurations of new homeless shelters, transitional living units, and subsidized housing would change the face of homelessness in a major city.
  • Examine the effect of natural catastrophes and climate change on property and casualty insurance claims in the southwestern US.
  • Analyze different possible approaches for improving retention of skilled aircraft maintenance personnel.
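A what-if model of this kind boils down to a parameterized simulation run under competing scenarios. The sketch below is a deliberately simplified, hypothetical shelter-capacity example; the demand distribution, bed counts, and function names are all illustrative assumptions, not a real model:

```python
import random

def simulate_unmet_demand(nightly_demand_mean, shelter_beds, nights=365, seed=42):
    """Toy model: total person-nights of unmet shelter demand over a year.
    The normal demand distribution and all parameters are illustrative."""
    rng = random.Random(seed)  # fixed seed so scenarios see identical demand
    unmet = 0.0
    for _ in range(nights):
        demand = rng.gauss(nightly_demand_mean, nightly_demand_mean * 0.1)
        unmet += max(0.0, demand - shelter_beds)
    return unmet

# What-if: compare the status quo against adding 200 beds.
baseline = simulate_unmet_demand(nightly_demand_mean=1000, shelter_beds=900)
expanded = simulate_unmet_demand(nightly_demand_mean=1000, shelter_beds=1100)
```

Because both scenarios replay the same simulated demand, stakeholders can attribute the entire difference in outcomes to the configuration change they are exploring.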

These models are extremely valuable tools for making your business more competitive in today’s marketplace.

Our Services

Data Discovery, Analysis and Interpretation – this involves collating, classifying, analyzing and interpreting the data aggregated from a multitude of sources within an organization. We perform trend analysis and present our findings so that management can make informed decisions about current and future processes.

Data Modeling and Visualization – our consultants work closely with an organization’s subject matter experts to design dimensional models and visualization solutions that support the organization’s requirements and goals. Rather than designing everything up front, we create designs that can be extended incrementally, enabling our customers to gain access to powerful new visualizations much sooner.

Data Warehousing – our data warehouse experts help an enterprise better manage and leverage its data by integrating multiple sources of data. We provide powerful, scalable solutions that meet the volume, growth, and velocity of the organization’s data. We use the Kimball-Corr agile data warehouse method, which leverages agile techniques such as model-storming to quickly create a bus matrix that guides and prioritizes our efforts.
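A bus matrix simply cross-references business processes against the conformed dimensions they share, which makes shared dimensions easy to spot and prioritize. A minimal in-code sketch, with hypothetical processes and dimensions:

```python
# Rows are business processes, columns are conformed dimensions
# (all names here are hypothetical examples).
bus_matrix = {
    "Maintenance Work Orders": {"Date", "Aircraft", "Facility", "Technician"},
    "Parts Inventory":         {"Date", "Facility", "Part"},
    "Flight Operations":       {"Date", "Aircraft", "Route"},
}

# Dimensions shared by every process are the highest-value ones to
# conform first, since all downstream marts will reuse them.
shared = set.intersection(*bus_matrix.values())
print(shared)  # {'Date'}
```

In a real engagement the matrix is built collaboratively in model-storming sessions; the point is that it turns prioritization into a visible, shared artifact.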

Data Cleansing and Conforming – before data can be transformed into knowledge (visualizations, tabular reports, analytics, predictive analytics), it must be loaded and prepared via a process often called “Extract, Transform, Load” (ETL). We suggest a variation called “Extract, Load, Transform” (ELT), where data is first staged (loaded) into the data lake and then transformed in multiple passes. We perform steps such as the following:

  • Data profiling: checking the volume of the data, how quickly it changes, and what schema it conforms to.
  • Conforming: what is the granularity of the data? Do we need to adjust it to make certain queries possible? Are there missing records?
  • Data cleansing: are there inconsistencies in the data? Does the data violate the schema in some cases?
  • Generating metadata: we need to create metadata for the data source itself, and also maintain reference data (for example, aircraft type or maintenance status) so that the data is easier to query later in tools like Apache Atlas.
  • Master data management: business rule enforcement, reference data management, hierarchy management, entity resolution.
  • Error events: do we need to generate error log events and hold certain records in “suspense” because they do not meet the minimum quality threshold? Are errors escalated appropriately and do alerts get sent out as needed?
  • Audit metadata: we need to track the load date, job name and user so we know when and how items are loaded into the data lake. This information is sent to tools such as Apache Atlas.
  • De-duplication: are there duplicate records? What are the survivorship rules?
  • Unique ID generation and cross referencing: we need to generate unique IDs for columns that warrant them, and create cross references.
  • Lineage and dependencies: what is the origin of the data, and does it have any dependencies on other data sets?
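Several of the steps above (profiling, cleansing with quality flags, de-duplication, and audit metadata) can be sketched as successive passes over staged records. The records, field names, and survivorship rule below are illustrative assumptions, not a production pipeline:

```python
from datetime import datetime, timezone

# Hypothetical staged records, loaded as-is into the lake ("extract, load").
staged = [
    {"tail_number": "N123", "aircraft_type": "B737"},
    {"tail_number": "N123", "aircraft_type": "B737"},   # duplicate
    {"tail_number": "n456", "aircraft_type": None},     # dirty record
]

# Pass 1 - profiling: record volume and the observed schema.
profile = {"rows": len(staged), "columns": sorted(staged[0].keys())}

# Pass 2 - cleansing and conforming: normalize case, and flag missing
# values with a metadata tag instead of silently dropping the record.
cleaned = []
for rec in staged:
    rec = dict(rec, tail_number=rec["tail_number"].upper())
    rec["quality_flags"] = [] if rec["aircraft_type"] else ["missing_aircraft_type"]
    cleaned.append(rec)

# Pass 3 - de-duplication: first record survives (a simple survivorship rule).
seen, deduped = set(), []
for rec in cleaned:
    if rec["tail_number"] not in seen:
        seen.add(rec["tail_number"])
        deduped.append(rec)

# Pass 4 - audit metadata: stamp load time and job name on each record so
# lineage tools can trace when and how items entered the lake.
for rec in deduped:
    rec["_load_ts"] = datetime.now(timezone.utc).isoformat()
    rec["_job"] = "elt_demo"
```

Note that the dirty record is kept and tagged rather than discarded, in line with the governance approach described below: dirty data is often valuable data.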

Data Migration and Cloud Migration – our consulting team has the breadth of knowledge and industry experience to manage all aspects of migrating legacy systems to the cloud. Setting up a data solution in the cloud may be an excellent way to start an organization’s cloud migration, particularly if the organization has a lot of interconnected legacy systems that may be difficult to migrate.

Data Governance – as new, more automated capabilities come online and enhance human decision-making, it becomes ever more critical that the data is at all times consistent, available, and fit for its intended purposes. Within big data environments, this entails navigating complex tradeoffs and ensuring we always have transparency and context (i.e. metadata). For example, we cannot simply enforce perfect data quality, because doing so could preclude us from accessing legacy data sources that contain dirty (but very valuable) data. We apply consistent design concepts, such as assigning metadata tags to flag data quality issues and accommodating conflicting or inconsistent data so it can be referenced when appropriate. We know that, while fixing an error directly in the source system is preferable, it is not always possible, so the transformation scripts must be able to consistently reapply fixes, lookups, de-duplication, and other improvements during ELT.
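Reapplying fixes consistently during ELT means the corrections must live in the pipeline, not in someone’s memory, and must be safe to run on every load. A minimal sketch of that idea, with hypothetical record keys and corrections:

```python
# Hypothetical standing corrections that cannot be made in the source
# system; reapplying them on every ELT run keeps downstream data consistent.
STANDING_FIXES = {
    # (record id, field): corrected value
    ("WO-1001", "aircraft_type"): "B737-800",
    ("WO-2002", "facility"): "JFK",
}

def apply_fixes(record):
    """Reapply known corrections; idempotent, so safe to run on every load."""
    fixed = dict(record)
    for (rec_id, field), value in STANDING_FIXES.items():
        if fixed.get("id") == rec_id:
            fixed[field] = value
    return fixed

rec = {"id": "WO-1001", "aircraft_type": "B737", "facility": "LGA"}
assert apply_fixes(rec) == apply_fixes(apply_fixes(rec))  # idempotent
```

Because the fixes are versioned alongside the transformation scripts, every reload of the lake produces the same corrected result, which is exactly the consistency guarantee governance requires.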


Copyright 2023 Ariel Partners. All rights reserved.
