Education

  • Master of Mathematics, Computer Science
    Researching machine learning under the supervision of Kate Larson
    University of Waterloo
    Waterloo, Ontario, Canada

  • Bachelor of Software Engineering - With Distinction
    Awarded IEEE Victoria Section Gold Medal (top BSEng student)
    University of Victoria
    Victoria, British Columbia, Canada

Experience

  • Software Engineer Co-op
    R&D, concept prototyping, and C# programming for reliable test equipment for cardiovascular medical devices
    ViVitro Labs
    Victoria, British Columbia, Canada | On-site

  • Software Developer
    Unity development, graphics optimization, and C# programming for the popular Unity asset 'Mesh Baker'
    Digital Opus
    Nelson, British Columbia, Canada | On-site

  • Software Developer
    Agile development with Azure DevOps and automated build & test pipelines on a 15-programmer team, building embedded radio systems in Java, C#, and Python
    Codan Communications
    Victoria, British Columbia, Canada | Hybrid

Awards

Fulbright Student Award for 2025-2026

$25,000 USD | 2025

Merit-based residential exchange award for Canadian graduate students wishing to conduct research in the United States.

IEEE Victoria Section Gold Medal in Software Engineering

2025

Awarded to the top software engineering student at UVic.

Woods Trust Scholarship

$1,169 CAD | 2024

Awarded to academically outstanding undergraduate engineering students at UVic.

President's Scholarship

$1,331 CAD | 2024

Awarded to academically outstanding undergraduate students in all faculties at UVic.

3rd Place in UVic AWS DeepRacer Competition 2024

2024

Trained a reinforcement learning model to drive an autonomous car around a race track in 9.016 seconds.

"Stand Out from the Crowd" Prize in Physics and Astronomy

$600 CAD | 2021

Awarded to the student with the highest grade in the most challenging undergraduate course in physics and astronomy at UVic.

UVic Excellence Scholarship

$24,000 CAD | 2020-2024

Awarded to incoming UVic students with outstanding academic records.

Dean's Entrance Scholarship

$1,331 CAD | 2020

Awarded to academically outstanding students entering engineering at UVic.

Papers

  • Distributed ML Property Attestation Using TEEs (unpublished) Authors: Idil Kara, Gavin Deane, Artemiy Vishnyakov • December 2025

    As large machine learning (ML) providers adopt model cards to document how models are trained, the question becomes: how can a verifier be sure that a card is honest? Prior work such as Laminator shows how a trusted execution environment (TEE) can produce a proof-of-training (PoT) artifact for a single node, attesting that its output model was trained on a specific dataset, architecture, and configuration. Modern training pipelines, however, are distributed and data-parallel. In this work we ask whether these single-node restrictions can be lifted to attest a distributed setting: if each individual node can attest that it behaved correctly, can we safely conclude that the whole system behaved correctly? Our key idea is to treat each worker as a Laminator-style prover and to run a coordinator inside a TEE that verifies worker PoT digests and aggregates their updates. Since the coordinator’s code is itself remotely attested, an external verifier only needs to trust the coordinator enclave; the distributed training job then collapses to a single PoT artifact stating that, if every node followed its attested code, the final model was trained as claimed or else the artifact fails to verify. We implement this protocol using PyTorch and a Docker-based TEE emulation, and evaluate it on data-parallel training over the CENSUS dataset. In our CPU-only prototype, attested runs incur a 2.2–3.1× slowdown (120–214% overhead) compared to an unattested baseline, with overhead scaling approximately linearly in the number of workers and epochs.
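The protocol described above can be illustrated with a minimal sketch. This is not the paper's implementation; all names (`pot_digest`, `coordinator_round`, the report fields) are hypothetical, and the "digest" here is simplified to a SHA-256 hash over the claimed training inputs and the worker's output model, standing in for a full TEE-attested proof-of-training artifact. It shows only the coordinator-side logic: verify each worker's digest against the expected dataset, architecture, and configuration, then aggregate the verified updates as in data-parallel training.

```python
import hashlib

def pot_digest(dataset_id: str, architecture: str, config: str,
               model_bytes: bytes) -> str:
    """Simplified stand-in for a PoT digest: a hash binding the
    training inputs (dataset, architecture, config) to the output model."""
    h = hashlib.sha256()
    for part in (dataset_id, architecture, config):
        h.update(part.encode())
    h.update(model_bytes)
    return h.hexdigest()

def coordinator_round(worker_reports, dataset_id, architecture, config):
    """Coordinator (conceptually running inside a TEE): recompute each
    worker's digest from the expected training inputs and the worker's
    reported model bytes; reject the round if any digest mismatches,
    otherwise average the per-parameter updates (data-parallel step)."""
    verified_updates = []
    for report in worker_reports:
        expected = pot_digest(dataset_id, architecture, config,
                              report["model_bytes"])
        if expected != report["digest"]:
            raise ValueError("PoT verification failed for a worker")
        verified_updates.append(report["update"])
    n = len(verified_updates)
    # Element-wise mean of the verified worker updates
    return [sum(vals) / n for vals in zip(*verified_updates)]
```

In the full scheme, the coordinator's own code is remotely attested, so an external verifier trusts only the coordinator enclave; the per-worker checks collapse the job into a single verifiable artifact.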

  • Prefill-Only Optimizations for Prefill-Decode Disaggregation in vLLM (unpublished) Authors: Sejal Agarwal, Maksym Bidnyi, Joshua Caiata, Gavin Deane • December 2025

    Disaggregating the prefill and decode steps in Large Language Model (LLM) inference has allowed for optimizing throughput and latency separately. Prior work has shown that hybrid prefilling and Job Completion Time (JCT)-aware scheduling can accelerate prefill-only workloads. This project considers whether these prefill-only optimizations can be used together with disaggregated prefill-decode, and what challenges exist in trying to use prefill-only optimizations in the disaggregated setting. We implement both techniques in vLLM’s prefill path and benchmark their performance against the standard disaggregated baseline. Across all loads, these changes underperform the baseline in request throughput, token throughput, and time-to-first-token. Our research reveals that while PrefillOnly-style gains are transferable in theory, they conflict with the coordination, memory behaviour, and compilation patterns of vLLM’s disaggregated architecture. Our key takeaway is that prefill specialization, while compelling in theory, is difficult to transfer in practice. We highlight where the combination of these two approaches breaks down and provide insight for potential avenues for improvement.

Skills

Programming

Experienced with Python, C#, Java, C, SQL, MATLAB, Bash

Technical Communication

Excellent writing and oral communication skills in both industry and academic contexts

DevOps processes

Familiar with Agile, Git, Jira, Azure DevOps, Gradle

Integrated Development Environments

Extensive use of JetBrains IDEs, VS Code, and Eclipse

Machine Learning

Familiar with PyTorch, data preprocessing, and ML serving pipelines

OS familiarity

Comfortable operating in both Linux and Windows for development

GenAI tool use

Highly proficient with ChatGPT, Gemini, and Claude, with an understanding of their limitations