Introduction
The Data Team’s role in the Hyper A project is to calculate the performance metrics of each Design Team. Each Design Team provides the Data Team with the component values needed to calculate its metrics. These component values are cumulative over the entirety of the design.
For example, the Industrial Team’s Primary Metric is the Energy Self-sufficiency Ratio:

The Industrial Team provides the Total Energy Generation and Total Energy Demand for the whole project, rather than separate values for individual design components. This separation lets the Design Teams work flexibly and keeps the Data Team from becoming co-designers.
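The metric above can be sketched as a small calculation, assuming the ratio is defined as total generation divided by total demand; the function name and kWh units are illustrative assumptions, not the team’s actual implementation:

```python
def energy_self_sufficiency_ratio(total_generation_kwh: float,
                                  total_demand_kwh: float) -> float:
    """Total generation divided by total demand (assumed definition).
    A ratio above 1.0 means the design generates more energy than it demands."""
    return total_generation_kwh / total_demand_kwh
```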
Target Metrics
Each Design Team has between one and four metrics for us to calculate.


Seminar Progress
Conceptual Challenges:
As the Data Team, we tasked ourselves early on with staying data-oriented and focusing on providing the insights and tools our Design Teams needed to progress in their own models. We intentionally committed to not taking on any BIM or CAD roles.
This puts us at a disadvantage, as we do not have a three-dimensional representation of our work outside of this seminar.
Additionally, we ask each Design Team to provide us with combined data from their work; we do not have access to the granular data used in design. As such, we chose to focus on visualizing a breadth of combined data rather than an in-depth analysis of a single dataset.
Visualization Theory
Over the course of the Hyper A project, a set of conceptual masses served as the whole project’s initial design. Through the module, the Design Teams refined these masses into the current state of the Hyper A project.
We find it conceptually interesting to use these initial masses to display the current state of the design.

Project Health
Metrics Visualization
We assign each of our five masses to represent one of the Design Teams: Facade, Residential, Industrial, Services, and Structural.
In this visualization we combine all the metrics each team uses to measure its performance. Since each metric is conceptually different, we compare each calculated value to a goal value and compute the percentage improvement (positive percentage) or shortcoming (negative percentage).
Example Data:

Average: -8.3%
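A minimal sketch of this comparison, assuming a higher value is better for the metric in question (the sign convention would flip for lower-is-better metrics):

```python
def percent_vs_goal(value: float, goal: float) -> float:
    """Percentage improvement (positive) or shortcoming (negative) of a
    calculated metric value against its goal, assuming higher is better."""
    return (value - goal) / goal * 100.0
```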
Each team’s combined performance is then compared and normalized on a 0.0 to 1.0 scale, with the top-performing team at 1.0 and the bottom-performing team at 0.0.
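The normalization can be sketched as a min-max scaling over the teams’ average percentages; the team names and values below are placeholders, not project data:

```python
def normalize_team_scores(team_averages: dict) -> dict:
    """Min-max scale so the bottom-performing team maps to 0.0 and the
    top-performing team maps to 1.0."""
    lo = min(team_averages.values())
    hi = max(team_averages.values())
    return {team: (value - lo) / (hi - lo)
            for team, value in team_averages.items()}
```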

These scales are then used to apply red-to-green gradients to each team’s assigned mass using Revit Materials, with a red mass signifying the bottom-performing team and a green mass signifying the top-performing team.
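A linear red-to-green ramp over the normalized score could look like the following; the exact colors of the Revit Materials are a design choice, so this RGB mapping is only an illustrative assumption:

```python
def red_green_color(score: float) -> tuple:
    """Map a 0.0-1.0 normalized score to an (R, G, B) tuple:
    0.0 -> pure red (bottom team), 1.0 -> pure green (top team)."""
    score = max(0.0, min(1.0, score))  # clamp to the expected range
    return (round(255 * (1.0 - score)), round(255 * score), 0)
```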


Individual Metrics
Each team’s individual metrics are also displayed in separate views using a blue-to-yellow gradient. The gradient values are determined by comparing all metrics across the entire project, with the top-performing metric shown in yellow and the bottom-performing metric in blue, using Graphic Overrides in the views.
Since our data output includes only one value per attribute, we generated a secondary set of randomized data so we could compare and verify our results.
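One simple way to produce such a randomized comparison set is to perturb each value by a random factor; this is an illustrative sketch under assumed parameters, not necessarily the generator used in the project:

```python
import random

def randomized_set(metrics: dict, spread: float = 0.5, seed: int = 0) -> dict:
    """Return a copy of the metric values, each scaled by a random factor in
    [1 - spread, 1 + spread]; a fixed seed keeps the comparison reproducible."""
    rng = random.Random(seed)
    return {name: value * rng.uniform(1.0 - spread, 1.0 + spread)
            for name, value in metrics.items()}
```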
Optimized Data


Randomized Data


Combined Metrics
We built a custom adaptive panel family for each Design Team to display its metrics in combined form.


Panel Optimization

Primary Daylight Factor

Target Energy Generation Ratio
Custom Tags
We created custom tags linked to the primary, secondary, and tertiary metrics and their values, allowing automatic tagging of the relevant data display according to the selected view. These tags were tied to shared parameters, which were in turn nested into families for an easy translation of data as it flowed in from Grasshopper nodes via Rhino.Inside.Revit.
Data Extraction
Our data extraction strategy relied heavily on the information from our dashboards. The exported CSV files were fed into preset algorithms that allow us to parse the data and extract the relevant items. This data was then fed into our individual metric calculators, which produced the required values. We used these values to generate the gradients along the forms, using multiple detail views to display the metrics separately. Each gradient corresponded to the value sent from Grasshopper, with our sub-metrics using Graphic Overrides to better explain the underlying values visually. These values were then compared against a normalized standard, through which we developed visual methods to display each team’s performance efficiency.
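The CSV parsing step can be sketched with Python’s csv module; the two-column "component"/"value" schema here is an assumed placeholder, not the actual export format of our dashboards:

```python
import csv

def read_components(path: str) -> dict:
    """Parse an exported dashboard CSV into {component_name: value} pairs.
    Assumes a header row with 'component' and 'value' columns (illustrative)."""
    with open(path, newline="") as f:
        return {row["component"]: float(row["value"])
                for row in csv.DictReader(f)}
```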

Our custom Grasshopper definition reads the CSV data and calculates the metric values and their percentage comparisons to the goal values. These values are then used to build the adaptive panel components; apply comparison gradients using Revit Materials; apply metric gradients to eleven views; and send the metric component, calculated, and percentage comparison values to Revit Shared Parameters.
Output Strategy
Our data is extracted from a total of 92 different models, from which we extract and verify the correct parameters according to the team metrics specified above. These models are fed into an algorithm that parses through the levels to find the required information.
Material Produced
- Revit Model
- Grasshopper Definition
- 3D View Drawings and Schedules (PDF)