
Deconstructing AI: Skye’s Mission to Make Intelligence Inclusive

Breakthroughs in AI mean little if we can’t explain them simply. That’s why I share my research in two voices: with the technical depth that speaks to academic peers, and in clear, simple language that invites everyone, especially young learners, to see themselves in the story of AI.

 

Here’s a brief translation of my scholarly AI journey into a storytelling format that brings my Ph.D. research to life.


My PhD Journey: The Scholar’s Lens


Research Overview

Advancing Equity through AI: Detecting and Mitigating Human Bias in Critical Systems

Human decision-making is often influenced by unconscious bias, leading to unintended disparities, particularly in domains such as higher education and financial services. My PhD research addresses this challenge by exploring how artificial intelligence (AI), particularly machine learning (ML), can be leveraged to detect, unpack, and ultimately reduce the impact of human bias.
 

The research investigates the use of multiple ML algorithms and analytical techniques to uncover patterns of inequity, evaluate systemic disparities, and inform more equitable practices. The focus is on practical applications within two key domains: higher education and financial inclusion.
 

Methodology and Technical Approach

The study incorporates a variety of technical approaches and tools:
 

  • BERTopic (Python): Applied for unsupervised topic modeling, utilizing sentence transformers, Uniform Manifold Approximation and Projection (UMAP), and Incremental Principal Component Analysis (IPCA) for dimensionality reduction.
     

  • Gradient Boosting: Used for both regression and classification tasks to build and fine-tune predictive models.
     

  • Data Preprocessing Techniques: Including cleansing, scaling, train/test splitting, and re-encoding to ensure data quality and integrity.
     

  • Model Training and Evaluation: Comparison of various ML algorithms—linear regression, logistic regression, k-nearest neighbors (KNN), multinomial Naïve Bayes, and gradient boosting—evaluated and refined through:

    • Area Under the Curve (AUC) and related performance metrics

    • Statistical significance testing

    • Hyperparameter optimization

    • Model validation and cross-validation techniques
       

  • Visualization Tools: Such as the Intertopic Distance Map to interpret and communicate model outputs.
     
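The BERTopic pipeline above combines several moving parts (sentence transformers, UMAP, and IPCA). As a minimal sketch of just the dimensionality-reduction step, here is scikit-learn's IncrementalPCA applied to synthetic vectors standing in for sentence-transformer embeddings; the data, batch size, and component count are illustrative assumptions, and BERTopic itself is not shown.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# Synthetic stand-ins for sentence-transformer embeddings
# (384 dims is typical of MiniLM-style models; real embeddings
# would come from the BERTopic pipeline's embedding step).
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(1000, 384))

# Incremental PCA fits in mini-batches, so a large document
# collection never has to sit in memory all at once.
ipca = IncrementalPCA(n_components=5, batch_size=200)
for start in range(0, len(embeddings), 200):
    ipca.partial_fit(embeddings[start:start + 200])

reduced = ipca.transform(embeddings)
print(reduced.shape)  # (1000, 5)
```

The incremental variant matters here because topic-modeling corpora (essays, reports, application narratives) can easily outgrow memory if reduced in a single batch.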

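The model-comparison step described above can be sketched with scikit-learn on synthetic data. This is an illustrative harness under assumed settings (5-fold cross-validation, AUC scoring), not the study's actual pipeline; multinomial Naïve Bayes is omitted because it requires non-negative count features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic tabular data standing in for (non-public) admissions
# or lending records.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Candidate models; scaling is folded into each pipeline so the
# preprocessing is re-fit inside every cross-validation split,
# avoiding leakage from test folds.
models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Compare candidates by mean AUC over 5-fold cross-validation.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```

Cross-validated AUC is one reasonable basis for the comparison the text describes; in practice it would sit alongside the significance testing and hyperparameter optimization listed above.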
Significance and Impact

This research is increasingly relevant as organizations and institutions navigate how to embed fairness into AI systems. Its importance lies in:
 

  • Informing Policy: The models developed can support institutions in crafting data-informed, equity-driven policy interventions.
     

  • Operationalizing Equity: By identifying systemic disparities in admissions, lending, or service access, the research helps move from awareness to action.
     

  • Enabling Accountability: Creating frameworks for ongoing monitoring and assessment of bias within algorithmic systems.
     

  • Scaling Inclusion: Providing scalable, repeatable models that can be adapted across sectors and geographies to ensure fairer outcomes for historically marginalized groups.
     

Ultimately, this work contributes to a growing field of responsible AI, bridging technical rigor with ethical imperatives. It aims to support institutions in designing not only smarter, but fairer systems, where human and machine decision-making align toward inclusive progress.


My PhD Journey: The Story that Brings it to Life


Skye’s Mission: Can AI Help Make Life More Fair?

Have you ever raised your hand in class, but the teacher kept calling on the same few kids every time? Or noticed that some students always get extra help while others are overlooked? That might feel a little unfair, and sometimes it actually is. Often, people make unfair decisions without even realizing it. This is called bias, and it happens when people or systems make choices that aren’t balanced.
 

That’s where Skye’s big mission comes in: using AI (Artificial Intelligence) to help spot these unfair patterns and find better ways to include everyone.
 

As part of Skye’s journey, we’re exploring how smart computer systems can help us spot unfairness in places like:
 

  • Schools and colleges (Who gets a scholarship vs. who doesn't, and why?)
     

  • Money and banks (Who gets loans? Who gets help?)
     

We’re teaching computers how to look at large sets of data (like digital clues) and find signs that something might not be equal.
 

How Skye Does It (With a Little Help from Machine Learning!)

To help computers understand patterns, we use tools like:
 

  • Topic explorers: These help AI understand what people are talking about in essays, reports, or messages.

  • Smart prediction tools: These help AI guess what might happen next based on what it learns.

  • Clean-up crews for data: We fix messy or confusing data to help the computer learn better.

  • Compare-and-choose tests: We try different learning methods to see which one is the fairest and most helpful.
     

We also look at how well the AI is doing, kind of like giving it a report card:
Did it guess fairly? Did it leave anyone out? Can we make it better?
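For readers who want to peek under the hood, the "report card" idea can be sketched in a few lines of Python. The groups and decisions here are entirely made up for illustration; comparing approval rates like this is a very simple version of what fairness researchers call a demographic parity check.

```python
# Hypothetical loan decisions: 1 = approved, 0 = denied.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

# Approval rate per group: the simplest report-card question is
# "did the system treat both groups about the same?"
rates = {group: sum(d) / len(d) for group, d in decisions.items()}
gap = abs(rates["group_a"] - rates["group_b"])

print(rates)               # {'group_a': 0.75, 'group_b': 0.375}
print(f"gap = {gap:.3f}")  # a large gap is a red flag worth investigating
```

A gap this size wouldn't prove bias on its own, but it is exactly the kind of "digital clue" the story above describes: a signal that something might not be equal.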

 

Why It Matters for the Future

Skye’s research is important because:
 

  • It helps adults make better decisions with the help of AI.

  • It helps schools and banks treat everyone more fairly, and can be applied in many other areas as well.

  • It shows that we can use technology to include everyone, not leave people behind.
     

In Skye’s words:

“We can’t fix what we can’t see, so let’s teach AI to see the unfairness and help us make things better.”
