RAC 2017 Overview and Stats


List of Resources for Research Groups (RRG) Awards (PDF)  (XLSX)
List of Research Platforms and Portals (RPP) Awards (PDF)  (XLSX)
CFI Challenge 1 Allocations (PDF)  (XLSX)

Compute Canada reserves 80% of its resources for the Resource Allocation Competitions (RAC), leaving 20% available through our Rapid Access Service. Unlike many other countries, Canada does not have a specialized provider serving very high-end needs separate from general-purpose needs.

We compared our largest users with the bulk of our users (who do not hold RAC awards) and found the quality of their publications, measured by field-weighted citation impact, to be similar. In other words, both large and small Compute Canada users achieve high levels of scientific impact with the research performed with the assistance of Compute Canada resources. View our bibliometrics report here.

If you have questions about the terminology used on this page, please consult the Compute Canada Technical Glossary. For other questions or general inquiries, email our RAC team or visit our Frequently Asked Questions page.

(Please note: all data contained within these pages is current as of April 29, 2017.)

Table 1: Applications submitted to the Resource Allocation Competitions
between 2011 and 2017

Year | Resources for Research Groups (RRG) | Research Platforms and Portals (RPP) | CFI Cyberinfrastructure Challenge 1 | Total
2017 | 345 | 64 | 5 | 414
2016 | 324 | 42 | - | 366
2015 | 335 | 15 | - | 350
2014 | 291 | - | - | 291
2013 | 211 | - | - | 211
2012 | 159 | - | - | 159
2011 | 135 | - | - | 135

Computational Resources

CPU Allocations

Based on the computing resources available for 2017, Compute Canada was able to allocate only 58% of the CPU (core years) requested. However, as Table 2 shows, the allocation success rate for CPU improved slightly in 2017 compared to 2016. Note that more than 50,000 of the cores available in 2017 are new and offer higher performance than those they replace.

Table 2: Historical CPU demand vs. supply (core years)

Year | Total CPU Capacity | Total CY Requested | Total CY Allocated | Allocation Success Rate
2017 | 182,760 | 254,251 | 147,384 | 0.58
2016 | 155,952 | 237,862 | 128,463 | 0.54
2015 | 161,888 | 191,690 | 123,699 | 0.65
2014 | 190,466 | 172,989 | 133,508 | 0.77
2013 | 187,227 | 142,106 | 126,677 | 0.89
2012 | 189,024 | 103,845 | 87,312 | 0.84
2011 | 132,316 | 72,848 | 75,471 | 1.04
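The allocation success rate in Table 2 is simply core-years allocated divided by core-years requested. The 2017 row can be checked in a couple of lines (figures taken directly from the table):

```python
# Success rate = core-years allocated / core-years requested (Table 2, 2017 row).
requested = 254_251
allocated = 147_384
success_rate = allocated / requested
print(round(success_rate, 2))  # 0.58, matching the table
```

The same ratio explains the 1.04 for 2011: allocations slightly exceeded requests that year.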

Table 3: 2017 CPU demand vs. supply by system

CPU System | 2017 Allocatable Capacity (core-years) | 2017 Total Request (core-years) | 2017 Total Allocation (core-years) | Fraction Allocated
briarée | 7,000 | 8,596 | 5,107 | 0.73
bugaboo | 4,220 | 5,658 | 3,507 | 0.83
CAC* | 840 | 1,079 | 800 | 0.95
cedar-compute | 24,000 | 36,935 | 21,076 | 0.88
glooscap | 1,344 | 786 | 564 | 0.42
gpc_ib | 30,912 | 45,081 | 28,560 | 0.92
graham-compute** | 31,000 | 47,164 | 25,643 | 0.83
grex | 3,712 | 4,579 | 2,464 | 0.66
guillimin | 17,240 | 23,788 | 13,903 | 0.81
mp2 | 30,984 | 41,600 | 24,862 | 0.80
orca | 7,680 | 9,502 | 4,687 | 0.61
orcinus | 9,616 | 13,783 | 6,739 | 0.70
parallel | 6,336 | 7,047 | 4,506 | 0.71
placentia | 2,636 | 2,750 | 1,591 | 0.60
psi | 900 | 515 | 334 | 0.37
sw_2 | 1,076 | 1,690 | 780 | 0.72
tcs | 3,264 | 3,676 | 2,261 | 0.69
Total | 182,760 | 254,251 | 147,384 | 0.81

* The total capacity of the CAC cluster is 2,600 core-years, but its allocatable capacity for 2017 is 840 core-years. In this case the allocation target is therefore 100% of 840 core-years, rather than 80% as for the other systems.

** Allocation on Graham will increase to more than 85% in September 2017, when GPC is decommissioned and some of its current users are moved to the new system.

GPU Allocations

GPU resources were more constrained than CPU resources. As Table 4 shows, demand for GPUs has increased 4.5x since 2015. Despite this increase, the allocation success rate rose to 38%, compared to 20% in 2016. Note that GPUs in the newest systems deliver much greater performance than legacy GPU devices.

Table 4: Historical GPU demand vs. supply (GPU years)

Year | Total GPU Capacity | Total Requested | Total Allocated | Allocation Success Rate
2017 | 1,420 | 2,785 | 1,042 | 0.38
2016 | 373 | 1,357 | 269 | 0.20
2015 | 482 | 608 | 300 | 0.49

Table 5: 2017 GPU demand vs. supply by system

GPU System | 2017 Allocatable Capacity (GPU-years) | 2017 Total Request (GPU-years) | 2017 Total Allocation (GPU-years) | Fraction Allocated
cedar-gpu | 584 | 1,163 | 506.7 | 0.87
graham-gpu | 320 | 843 | 253.9 | 0.79
guillimin-gpu | 64 | 167 | 56.9 | 0.89
guillimin-phi | 100 | 54 | 25.2 | 0.25
helios-gpu | 72 | 230 | 61.0 | 0.85
parallel-gpu | 180 | 326 | 138.4 | 0.77
monk-gpu* | 100 | 0 | 0.0 | 0.00
Total | 1,320 | 2,785 | 1,042.1 | 0.79

*Monk-GPU is out of warranty but available for opportunistic use at the user’s risk.

Cloud Allocations

The installation of Arbutus at the University of Victoria increased our cloud computing capacity at UVic from 104 nodes to 290, in addition to the 36 nodes in Cloud East at the Université de Sherbrooke. Storage at UVic was quadrupled, to over 2.2 petabytes. We received requests for 9,152 VCPUs against a capacity of 23,040 VCPUs. As awareness of and demand for cloud computing resources grow, we anticipate more requests in this area.

Table 6: 2017 VCPU demand vs. supply by system

Cloud System | 2017 Allocatable Capacity (VCPU) | 2017 Total Request (VCPU) | 2017 Total Allocation (VCPU) | Fraction Allocated
arbutus-compute-cloud | 14,592 | 6,778 | 3,787 | 0.26
arbutus-persistent-cloud | 7,296 | 2,374 | 1,990 | 0.27
East-cloud* | 1,152 | 0 | 0 | -
Total | 23,040 | 9,152 | 5,776.6 | 0.25

*East-cloud is available for users needing cloud resources without an allocation via our Rapid Access Service.

Storage Allocations

The incorporation of the new systems Cedar (SFU), Graham (Waterloo), and Arbutus (Victoria) made it possible for Compute Canada to meet the storage demand in 2017, as Table 7 shows.

Table 7: 2017 Storage Supply vs. Demand by Storage Type (TB)

Storage Type | 2017 Cluster Capacity | 2017 Total Requested | 2017 Total Allocated | Success Rate
Project | 43,151 | 31,335 | 30,146 | 0.96
Nearline | 83,333 | 16,640 | 16,892 | 1.02
Cloud | 660 | 518.5 | 518.5 | 1.00
Total | 127,144 | 48,493.5 | 47,556.5 | 0.98

Table 8: 2017 Project Storage Supply vs. Demand (TB)

Project Storage | 2017 Allocatable Capacity | 2017 Total Request | 2017 Total Allocation | Fraction Allocated
briarée | 200 | 203 | 157 | 0.79
bugaboo | 1,110 | 786 | 766 | 0.69
CAC | 1,000 | 1,231 | 1,081 | 1.08
global_c | 642 | 379 | 379 | 0.59
gpc_ib | 3,000 | 2,104 | 1,664 | 0.55
guillimin-datastar | 3,800 | 3,677 | 3,677 | 0.97
helios | 0 | 2 | 0 | -
mp2 | 800 | 926 | 693 | 0.87
NDC-SFU | 14,900 | 9,300 | 9,143 | 0.61
NDC-Waterloo | 15,000 | 10,821 | 10,845 | 0.72
NDC-UVic | 2,443 | 1,800 | 1,643 | 0.67
orcinus | 256 | 98 | 98 | 0.38
glooscap | 0 | 8 | 0 | 0
Total | 43,151 | 31,335 | 30,146 | 0.70

Table 9: 2017 Nearline Storage Supply vs. Demand (TB)

Nearline Storage | 2017 Allocatable Capacity (TB) | 2017 Total Request (TB) | 2017 Total Allocation (TB) | Fraction Allocated
guillimin-datastar | 2,500 | 935 | 944 | 0.38
HPSS | 12,500 | 5,483 | 5,886 | 0.47
mammouth-archive | 8,333 | 90 | 30 | 0.00
NDC-SFU* | 30,000 | 3,214 | 3,214 | 0.11
NDC-Waterloo | 30,000 | 6,918 | 6,818 | 0.23
Total | 83,333 | 16,640 | 16,892 | 0.20

* NDC = National Data Cyberinfrastructure

Table 10: 2017 Cloud Storage Supply vs. Demand (TB)

Cloud Storage (Ceph) | 2017 Allocatable Capacity (TB) | 2017 Total Request (TB) | 2017 Total Allocation (TB) | Fraction Allocated
arbutus-storage-cloud | 560 | 518.5 | 518.5 | 0.93
East-cloud | 100 | 0 | 0 | 0
Total | 660 | 518.5 | 518.5 | 0.79

Acceptance Rate

Submissions are evaluated for both technical feasibility and scientific excellence. For the 2017 competitions, 414 applications were submitted and 390 allocations were awarded. Note that virtually all applicants request resources to support research programs and HQP that are already funded through tri-council and other peer-reviewed sources.

This year’s resource allocation competitions awarded 58% of the total compute requested and 98% of the total storage requested. Due to the competitiveness of the proposals and the limited computing resources available, all projects, across all disciplines, received final allocations smaller than their original requests.

Table 11: Requests vs. Allocations (broken down by resource)

RAC 2017 | Number of Requests Received | Number of Requests Granted
Storage | 282 | 271
CPU | 351 | 314
GPU | 42 | 34
Cloud (VCPU) | 46 | 41

Allocation Process

  • Compute Canada Technical staff review each proposal;
  • A peer review panel evaluates each proposal:
    • Each proposal receives multiple independent reviews;
    • Scientific committees meet to discuss the applications;
    • The peer review panel may or may not recommend specific cuts for an application;
    • The peer review panel gives a final science score on a 5-point scale;
  • The committee of RAC chairs endorses a scaling function based on science score. That scaling function is applied to all compute requests.

Scaling for Compute Requests

As in previous years, the compute resources available in 2017 were not sufficient to satisfy demand. This is because a considerable number of legacy systems are being removed from service at the same time as the new systems come online.

The scaling function applied to the 2017 competition (see chart below) was set so that only applications with a science score of 2.25 or higher received an allocation, with a maximum of 87.5% of the request for those with a score of 5. Note that those who did not receive a compute allocation can still make opportunistic use of the systems via our Rapid Access Service.
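The exact shape of the 2017 curve was set by the committee of RAC chairs and only its two endpoints are stated here. Purely as an illustration, assuming a linear ramp between those endpoints (no allocation below a score of 2.25, 87.5% at a score of 5), a scaling function could look like:

```python
def scaled_fraction(score: float) -> float:
    """Illustrative scaling function: fraction of the reviewed compute
    request that is awarded, given the panel's science score (5-point scale).

    The linear ramp is an assumption for illustration; the actual 2017
    curve (including the fraction awarded exactly at the 2.25 cutoff)
    appeared in the chart referenced above.
    """
    cutoff, top, max_fraction = 2.25, 5.0, 0.875
    if score < cutoff:
        return 0.0  # below the cutoff: no compute allocation
    return max_fraction * (score - cutoff) / (top - cutoff)

print(round(scaled_fraction(5.0), 3))  # 0.875
print(scaled_fraction(2.0))            # 0.0
```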

Monetary Value of the 2017 Allocations

These values represent an average across all Compute Canada facilities and include the total capital and operational costs incurred by Compute Canada to deliver the resources and associated services. These are not commercial or market values. For the 2017 competition, the value of the resources allocated was calculated on a per-year basis using the following rates:

  • $188.84 / core-year
  • $566.52 / GPU-year
  • $128.00 / TB-year
  • $40.50 / VCPU-year
  • $178.50 / cloud storage TB-year (Ceph)
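Using these rates, the notional value of an award follows directly by multiplying each allocated quantity by its per-year rate. For a hypothetical award (figures chosen only for illustration) of 100 core-years, 4 GPU-years, and 20 TB of storage:

```python
# 2017 per-year rates from the list above (CAD).
RATES = {
    "core_year": 188.84,
    "gpu_year": 566.52,
    "tb_year": 128.00,
    "vcpu_year": 40.50,
    "ceph_tb_year": 178.50,
}

# Hypothetical award, for illustration only.
award = {"core_year": 100, "gpu_year": 4, "tb_year": 20}

value = sum(RATES[k] * qty for k, qty in award.items())
print(f"${value:,.2f}")  # $23,710.08
```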

Please note that the valuation of each of these resources decreases each year as older, more expensive resources are retired and replaced with newer, more cost-effective ones.