Independent Research

Research Portfolio

Exploring AI in education and cybersecurity challenges in the quantum computing era through rigorous independent research.

AI IN EDUCATION

AI as Co-Teacher: Human-Centered Design, Learning Outcomes, and Ethical Tradeoffs

Comprehensive research synthesis exploring AI's role in education under the mentorship of Dr. Natasha Mancuso, EdD (Foothill College, Stanford Global Studies Fellow)

PERIOD
2010-2025 Literature Review
SCOPE
38 empirical studies + 6 policy reports
MENTOR
Dr. Natasha Mancuso, EdD
STATUS
Complete • Expanding with Primary Data

Abstract

Artificial intelligence (AI) is transforming education by reshaping how teachers teach and how students learn. As AI evolves from experimental tools into instructional partners, the central challenge is ensuring it strengthens rather than replaces the teacher's role.

This study investigates AI's potential as a co-teacher—a collaborator that enhances pedagogical effectiveness while preserving essential human elements. Drawing on 38 empirical studies and 6 policy reports from 2010-2025, including randomized controlled trials, systematic literature reviews, and empirical stakeholder studies, this paper examines AI's impact on learning outcomes across multiple disciplines, cultures, and educational contexts.

Key findings: AI tutoring and adaptive learning platforms improve outcomes in structured subjects such as mathematics and science. Teachers report lower administrative burden and more time for mentoring when AI supports grading and feedback. Students show higher engagement when AI supplements rather than substitutes direct teacher interaction. Ethical issues, including transparency, bias mitigation, and equitable access, remain central to sustainable adoption.

Overall Finding: Evidence suggests that AI's educational value lies not in automation but in collaboration. When guided by teachers and grounded in ethical design, AI can extend the reach of instruction without diminishing the human connection essential to meaningful learning.

Five Key Findings

1. Context is Everything

AI's effectiveness varies dramatically based on learner expertise, task complexity, and implementation quality. Success depends on three consistent factors: (1) teacher oversight, where educators interpret and apply AI-generated insights; (2) adequate infrastructure and training; and (3) culturally responsive design.

2. The Equity Paradox

AI presents a fundamental tension—while personalization could help marginalized students, algorithmic bias and unequal access could worsen social inequalities. Wealthier schools achieve higher gains due to better infrastructure, teacher training, and reliable internet access.

3. Privacy is Paramount

Data privacy and security emerge as the primary concern, with stakeholders rating it 4.18/5.0 in urgency (p<0.01). Policy frameworks must evolve to ensure AI implementation aligns with equity and safety standards through ethical review committees, data transparency laws, and independent audits.

4. Teacher Augmentation, Not Replacement

The most successful AI applications enhance teacher capacity. A Stanford study found that AI feedback on instructor communication improved teaching quality and student satisfaction. The most effective models treat AI as a collaborative cognitive partner that amplifies teachers' instructional reach while maintaining emotional and cultural connection.

5. Ten Interconnected Challenges

Research with 260 stakeholders identified ten significantly correlated concerns (p<0.01) requiring holistic strategies: data privacy, algorithmic bias, teacher agency, equity, student motivation, over-reliance on automation, transparency, cultural responsiveness, long-term retention, and systemic policy alignment.

Methodology

This research follows a structured literature review approach, analyzing academic and policy-based sources from 2010 to 2025. The goal was to identify where AI has demonstrated success in improving learning outcomes, supporting teachers, and promoting equity.

Data Collection: Sources were collected using databases such as Web of Science, ERIC, Scopus, and Google Scholar, as well as education policy repositories maintained by UNESCO, OECD, and the U.S. Department of Education. An initial pool of over 70 publications was screened for relevance, resulting in 38 empirical studies and 6 policy reports that met inclusion criteria.

  • METR Study (Becker et al., 2025): RCT with 453 developers testing AI productivity—found 19% slowdown for experienced users, revealing context-dependent effectiveness
  • Kulik & Fletcher (2016): Meta-analysis of 50 ITS studies showing +0.66 SD gains
  • Zhou et al. (2020): Rural education RCT in China with 1,200 students—closed 60% of achievement gap
  • Carnegie Learning (2018): Multi-school MATHia study demonstrating faster mastery with teacher check-ins
  • Limna et al. (2022): Systematic review of AI applications in education with 150 teachers
  • Al-Zahrani (2024): Empirical study of 260 educational stakeholders on ethics and perceptions
  • Additional Studies: Khan Academy (Khanmigo), Squirrel AI (10,000+ students), TeachFX pilots, and policy reports from UNESCO, OECD, and U.S. Department of Education

Research Outcomes: Across the 38 reviewed studies, approximately 70% reported measurable academic improvement, 20% reported mixed or neutral outcomes, and 10% found no significant change. The strongest effects appear in structured domains (math, science) with clear learning progressions.
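Gains like the +0.66 SD reported by Kulik & Fletcher are standardized mean differences (Cohen's d): the difference between group means divided by the pooled standard deviation. A minimal sketch of that computation—the scores below are hypothetical illustrations, not data from any reviewed study:

```python
import statistics

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Standardized mean difference between two groups, using the pooled SD."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * statistics.variance(treatment) +
                  (n2 - 1) * statistics.variance(control)) / (n1 + n2 - 2)
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_var ** 0.5

# Hypothetical post-test scores for an AI-tutored group vs. a control group
d = cohens_d([82, 85, 88, 90, 84], [78, 80, 83, 79, 81])  # d ≈ 2.1 here
```

By convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 large, which puts the +0.66 SD figure in the medium-to-large range.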

🔬 Ongoing Expansion: Currently conducting primary research through educator surveys to validate findings with real-world practitioner perspectives and identify implementation barriers in K-12 settings. Additional insights will be incorporated from TeachNova platform data (hundreds of educators across 5+ countries).

Real-World Application

Research findings directly informed TeachNova, an AI-powered education platform serving hundreds of educators across 5+ countries, implementing personalized instruction, teacher augmentation, standards-aligned content, and multilingual delivery.

Complete research paper with Executive Summary, Five Key Findings, and comprehensive methodology

Download Full Paper (PDF)
CYBERSECURITY

Quantum Computing's Impact on Global Security Infrastructure

Comprehensive analysis of quantum computing's transformative effects on cryptography, national security, and digital infrastructure resilience in the quantum era

PERIOD
October 2024 - October 2025
SCOPE
42 academic papers + 6 policy reports
MENTOR
Director at Apple
STATUS
Complete • October 2025

Abstract

The advent of cryptographically relevant quantum computers (CRQCs) threatens to collapse the mathematical foundations of modern public-key cryptography. This research analyzes the existential threat posed by Shor's algorithm, which solves integer factoring and discrete logarithms—problems with no known efficient classical algorithm—in polynomial time, rendering RSA, ECC, and Diffie-Hellman cryptographically obsolete.

The study examines three critical dimensions: (1) Technical vulnerability analysis explaining how Shor's algorithm mathematically breaks current cryptosystems and why increasing key sizes provides no defense, (2) The "Harvest Now, Decrypt Later" threat, where adversaries collect encrypted data today for future quantum decryption—meaning data encrypted in 2024 faces retroactive compromise in 2035-2045, and (3) Post-quantum cryptographic solutions, evaluating NIST-standardized lattice-based algorithms (CRYSTALS-Kyber, CRYSTALS-Dilithium) and their deployment readiness.

Drawing on 42 academic papers, NIST standards documentation, NSA policy frameworks, and quantum computing resource estimates, this paper demonstrates that quantum-safe migration is a decade-long socio-technical challenge requiring immediate action. Timeline projections suggest CRQCs capable of breaking RSA-2048 will emerge between 2035-2050, but data with long confidentiality requirements (government secrets, medical records, intellectual property) is already at risk.

Central Conclusion: The quantum threat is mathematically certain, temporally urgent, and infrastructurally complex. Organizations must begin hybrid cryptography deployment now—combining classical and post-quantum algorithms—to build cryptographically agile systems capable of defending against both current and future adversaries. The window for action is narrowing: data encrypted today is already being harvested for decryption in the quantum era.
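The core idea of hybrid deployment is that one session key is derived from both a classical and a post-quantum shared secret, so an attacker must break both exchanges. A minimal sketch of that combiner, HKDF-style per RFC 5869—the placeholder byte strings stand in for real X25519 and ML-KEM (Kyber) outputs, and the function name is illustrative:

```python
import hashlib
import hmac

def hybrid_key(classical_ss: bytes, pq_ss: bytes, info: bytes = b"hybrid-kex") -> bytes:
    """Derive one session key from both shared secrets.

    Concatenating the secrets before extraction means recovering the
    session key requires breaking BOTH key exchanges.
    """
    # HKDF-Extract: a pseudorandom key from the concatenated secrets
    prk = hmac.new(b"\x00" * 32, classical_ss + pq_ss, hashlib.sha256).digest()
    # HKDF-Expand (single block): bind the key to its context label
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

# Placeholder secrets; in practice these come from X25519 and ML-KEM
session_key = hybrid_key(b"\x11" * 32, b"\x22" * 32)
print(len(session_key))  # 32-byte session key
```

Real protocols follow the same concatenate-then-derive pattern; for example, hybrid TLS key exchange feeds both shared secrets into the handshake key schedule together.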

Seven Foundational Insights

1. Mathematics is Destiny

The vulnerability is not a software bug to be patched—it's a mathematical fact. Shor's algorithm proves that quantum computers will break RSA and ECC with mathematical certainty. No amount of key size increases can prevent this collapse.
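The certainty comes from a classical reduction: factoring N reduces to finding the multiplicative order of a random a mod N. A toy Python sketch of that reduction, with brute-force order-finding standing in for the step Shor's quantum period-finding makes polynomial-time:

```python
import math
import random

def order(a: int, n: int) -> int:
    """Brute-force multiplicative order of a mod n.

    This is the exponentially hard step classically; Shor's quantum
    period-finding replaces it with a polynomial-time computation.
    """
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n: int) -> int:
    """Classical post-processing of Shor's algorithm on a toy modulus."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                 # lucky guess already shares a factor
        r = order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:           # skip the trivial square root -1 mod n
                f = math.gcd(y - 1, n)
                if 1 < f < n:
                    return f

print(factor_via_order(15))  # prints a nontrivial factor: 3 or 5
```

Because only the order-finding step is hard, a CRQC breaks RSA end to end—larger keys raise the quantum cost only polynomially, which is why key-size increases cannot restore security.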

2. Efficiency Against One Adversary Can Be Vulnerability Against Another

ECC's brilliance—achieving RSA-3072 security with 256-bit keys—made it the gold standard of modern cryptography. Against quantum computers, this efficiency vanishes entirely. The lesson: optimizing for today's threat model can create catastrophic liability against tomorrow's adversary.

3. The Attack is Happening Now

Adversaries don't need quantum computers to exist today to compromise data encrypted today. Patient state actors are harvesting encrypted traffic now, storing it cheaply, and waiting for CRQCs to decrypt 10-20 years of "secure" communications retroactively.

4. Migration Takes Longer Than the Threat Timeline

Global migration to post-quantum cryptography requires 10-15 years even with full executive support. Organizations starting today will barely finish before CRQCs arrive. Organizations waiting for quantum computers to exist have already failed—the data was harvested years earlier.

5. Lattices Offer Hope, Not Certainty

Learning With Errors (LWE) and related lattice problems appear resistant to quantum attack: no efficient quantum algorithm is known, and 40+ years of lattice cryptanalysis have failed to break the hardness assumptions. But "appears resistant" is not "provably secure." We're betting global security on lattice hardness with only 15 years of intensive study.
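A toy sketch of the LWE structure, with parameters far too small for real security: each sample reveals a noisy inner product of a public vector with the secret, and it is the small error term e—not the linear algebra—that makes recovery hard.

```python
import random

q, n, m = 97, 4, 8   # toy modulus/dimension; real schemes use n in the hundreds
s = [random.randrange(q) for _ in range(n)]          # secret vector

def lwe_sample(secret: list[int]) -> tuple[list[int], int]:
    """One LWE sample: (a, b) with b = <a, secret> + e mod q."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])                    # small noise term
    b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
    return a, b

samples = [lwe_sample(s) for _ in range(m)]
# Without e, recovering s from enough samples is Gaussian elimination.
# With e, the best known attacks—classical or quantum—scale exponentially in n.
```

This gap between the noiseless case (trivial) and the noisy case (believed exponentially hard) is exactly the hardness assumption that CRYSTALS-Kyber and CRYSTALS-Dilithium rest on.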

6. The Harvest-Decrypt Window Threatens Data Already Encrypted

Any data requiring confidentiality beyond the quantum timeline (15-30 years) is already compromised by harvest-now-decrypt-later attacks. Medical records (70-year sensitivity), government secrets (50-year classification), and corporate IP (15-year competitive value) encrypted today face retroactive exposure.

7. This is an Infrastructure Challenge, Not Just a Technical Problem

Post-quantum cryptography is mathematically solved (Kyber works!). The bottleneck is deployment: $85-180 billion global migration cost, 30-50x larger keys impacting bandwidth and performance, organizational inertia, and lack of executive awareness. The mathematics is ready. Society is not.

Topics Covered

1. Quantum Threat to Current Cryptography

Analysis of how Shor's algorithm breaks RSA, ECC, and other public-key systems that secure internet communications, financial transactions, and classified government data.

2. "Harvest Now, Decrypt Later" Attack Vectors

Examining adversary strategies to collect encrypted data today for future quantum decryption—the timeline collapse between data collection and exploitation.

3. Post-Quantum Cryptographic Standards

Evaluation of NIST-standardized post-quantum algorithms (lattice-based, hash-based, code-based) and their deployment readiness across critical systems.

4. National Security & Geopolitical Implications

Impact on intelligence gathering, military communications, diplomatic secrets, and the shifting balance of power in quantum-enabled cyber warfare.

5. Critical Infrastructure Resilience

Vulnerability assessment of financial systems, power grids, healthcare networks, and supply chains—and migration strategies to quantum-resistant architectures.

6. Quantum Computing Timeline & Threat Urgency

Assessing when cryptographically relevant quantum computers (CRQCs) will emerge and mapping data sensitivity lifecycles against this timeline.

7. Economic & Implementation Challenges

Cost-benefit analysis of quantum-resistant migration, backward compatibility requirements, and resource constraints facing governments and enterprises.

Complete research paper with Abstract, Seven Foundational Insights, technical analysis, and comprehensive bibliography (42+ sources)

Download Full Paper (PDF)
Approach

Research Philosophy

My research is driven by a belief that emerging technologies must serve humanity, not the other way around. Whether exploring AI's role in education or preparing for quantum threats to our privacy, I approach each question with equal parts optimism and caution—excited by possibility, grounded in evidence, and committed to building systems that enhance human flourishing while protecting our fundamental rights.

Discuss Research Opportunities →