This study employs Density Functional Perturbation Theory (DFPT) to elucidate the influence of biaxial strain on the Raman-active optical phonon modes and electronic band gap of monolayer WS2. By analyzing phonon and electronic dispersions, this work establishes a direct correlation between applied strain and the resulting modifications of the band gap and optical phonon modes, providing a strategic framework for tailoring material properties. The computational approach varies the a and b lattice parameters from −2% to +2% in 0.5% increments. Self-consistent field (SCF) and phonon calculations are conducted to derive phonon frequencies and construct band structures along the lattice's high-symmetry paths. Calculations on a 2×2×1 WS2 supercell indicate that biaxial strain reduces the direct band gap at the K-point of the Brillouin zone. Furthermore, under tensile strain the A1g and E2g Raman modes exhibit pronounced blue and red shifts, respectively. These results deliver nuanced insights into the strain-dependent electronic and vibrational behavior of WS2, paving the way for next-generation optoelectronic and photonic devices with highly tunable characteristics.
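The strain sweep described above (biaxial strain from −2% to +2% in 0.5% steps, scaling a and b together) can be sketched as follows. The DFT code used is not specified here, and the unstrained lattice constant a0 = 3.18 Å is an assumed literature-like value for monolayer WS2, not a result of this study; the sketch only enumerates the geometries that would seed each SCF + phonon run.

```python
# Sketch of the biaxial strain sweep: -2% to +2% in 0.5% increments,
# scaling the in-plane lattice constants a and b together.
# a0 = 3.18 Å is an assumed value for monolayer WS2 (illustrative only).

a0 = 3.18  # assumed unstrained in-plane lattice constant (Å)

def strain_grid(lo=-2.0, hi=2.0, step=0.5):
    """Return the list of biaxial strain percentages to sample."""
    n = int(round((hi - lo) / step)) + 1
    return [lo + i * step for i in range(n)]

def strained_lattice(a0, strain_pct):
    """Scale a (= b for biaxial strain) by the given strain percentage."""
    return a0 * (1.0 + strain_pct / 100.0)

for s in strain_grid():
    a = strained_lattice(a0, s)
    # Each (s, a) pair would seed one SCF run followed by a DFPT phonon
    # calculation at that geometry.
    print(f"strain {s:+.1f}%  ->  a = b = {a:.4f} Å")
```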
Hyperspectral optical microscopy is promising for rapid, non-destructive assessment of two-dimensional (2D) semiconductor quality, but conventional interpretation based on curve-fitting to excitonic or other spectral features is slow, model-dependent, and unreliable for noisy or sparse data. Here, we present a machine learning framework that links spectra to material properties by learning the relationship between spectral signatures and point defect density, eliminating the need for explicit line-shape modeling. Trained on differential reflectance spectra of ion-irradiated monolayer WS2, a conditional Wasserstein GAN (c-WGAN) generates realistic class-conditioned spectra that reproduce dose-dependent excitonic peak broadening and contrast loss, achieving a mean cosine distance of 0.09 from experiment and enabling smooth interpolation across defect-density regimes. A convolutional neural network trained on real data alone predicts ion-dose-encoded defect density with 91% accuracy; pre-training on synthetic data and fine-tuning on real data raises this to 95%. The c-WGAN also supports interpolation between latent-space representations of the data while faithfully generating high-fidelity samples that feed directly into a regression pipeline (R² = 0.9987). These results show that ML-enabled analysis of hyperspectral optical data can bypass fitting-based workflows, reveal latent structure–property relationships, and provide rapid, non-destructive defect quantification to accelerate synthesis and optimization of 2D semiconductors.
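A minimal sketch of the cosine-distance metric used above to compare generated and experimental spectra. The two example "spectra" here are synthetic Gaussian peaks standing in for real differential reflectance data; the energy axis, peak positions, and widths are illustrative assumptions, not values from the study.

```python
import numpy as np

# Cosine distance = 1 - cosine similarity between two spectra, treating each
# spectrum as a vector. The example spectra below are synthetic Gaussians:
# a sharp "excitonic" peak and a slightly broadened, dimmer copy.

def cosine_distance(x, y):
    """1 - cosine similarity between two spectra (0 = identical shape)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return 1.0 - float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

energy = np.linspace(1.8, 2.2, 200)                    # photon energy (eV), illustrative
real = np.exp(-((energy - 2.0) / 0.020) ** 2)          # sharp peak
gen  = 0.9 * np.exp(-((energy - 2.0) / 0.025) ** 2)    # broadened, lower contrast

print(f"cosine distance: {cosine_distance(real, gen):.3f}")
```

Note that cosine distance ignores overall amplitude (the 0.9 scale factor cancels), so it isolates shape differences such as peak broadening.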
Manuscript Coming Soon!
I took part in a research fellowship funded by the National Science Foundation, which gave me the opportunity to work in the field of Materials Data Science. Under the supervision of one of my professors, I worked on a computer vision project using instance segmentation to improve on current methods of detecting small, spherical particles in Scanning Electron Microscope images of a gas-atomized nickel superalloy powder. To create the ground-truth labels, I used VGG Image Annotator to manually map out thousands of particle instances across 5 images. I then trained the model using a Meta AI implementation of Mask R-CNN, achieving a 6% increase in detection recall. I presented my findings at an undergraduate research symposium to other students and researchers in the Materials Data Science field.
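Detection recall, the metric improved above, can be sketched as follows: a ground-truth particle counts as detected if some predicted box overlaps it with IoU at or above a threshold. The 0.5 threshold and the boxes below are illustrative assumptions, not the actual evaluation protocol used in the project.

```python
# Hedged sketch of detection recall: a ground-truth box is "found" if any
# predicted box overlaps it with IoU >= 0.5. Boxes are (x1, y1, x2, y2);
# the coordinates below are made up for illustration.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def recall(gt_boxes, pred_boxes, thresh=0.5):
    """Fraction of ground-truth particles matched by at least one prediction."""
    matched = sum(any(iou(g, p) >= thresh for p in pred_boxes) for g in gt_boxes)
    return matched / len(gt_boxes) if gt_boxes else 0.0

gt = [(0, 0, 10, 10), (20, 20, 30, 30), (50, 50, 60, 60)]
pred = [(1, 1, 11, 11), (21, 19, 31, 29)]   # third particle missed
print(recall(gt, pred))                     # 2 of 3 particles found
```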
Electronic and optoelectronic applications of two-dimensional (2D) semiconductors demand precise control over material quality, including thickness, composition, doping, and defect density. Conventional benchmarking methods (e.g. charge transport, confocal mapping, electron or scanning probe microscopy) are slow, perturb sample quality, or involve trade-offs between speed, resolution, and scan area. To accelerate assessment of 2D semiconductors, we demonstrate a broadband, wide-field hyperspectral optical microscope for 2D materials that rapidly captures a spatial-spectral data cube within seconds. The data cube comprises x–y spatial coordinates (a 300 × 300 field with ∼1 µm resolution) and, at each pixel, a spectrum over a selectable wavelength range between 200 and 1100 nm. Using synthesized films and heterostructures of transition metal dichalcogenides (MoS2, WS2, VxW1−xS2, and WSe2), we show that this cost-effective technique detects spectral fingerprints of material identity, doping, grain boundaries, and alloy composition, and enables advanced analysis, including unsupervised machine learning for spatial segmentation.
Related Paper: https://arxiv.org/abs/2506.18342
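The unsupervised spatial segmentation mentioned above can be sketched by clustering each pixel's spectrum as a feature vector. The data cube below is synthetic (two noisy "materials" with distinct flat spectra), the cluster count is fixed at 2, and the tiny k-means here stands in for whatever clustering the actual pipeline uses; none of these choices come from the paper.

```python
import numpy as np

# Illustrative spatial segmentation of a hyperspectral cube: treat each
# pixel's spectrum as a vector and cluster with a minimal k-means.
# The two-material cube below is synthetic (assumed for illustration).

rng = np.random.default_rng(0)
H, W, B = 20, 20, 50                      # spatial size and number of bands
cube = np.zeros((H, W, B))
cube[:, :10] = 1.0 + 0.05 * rng.standard_normal((H, 10, B))   # "material A"
cube[:, 10:] = 2.0 + 0.05 * rng.standard_normal((H, 10, B))   # "material B"

def kmeans(X, k, iters=20):
    """Tiny k-means: returns an integer cluster label for each row of X."""
    # Deterministic init: first and last rows (one from each synthetic region).
    centers = X[[0, len(X) - 1]].copy() if k == 2 else X[:k].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)  # squared distances
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

labels = kmeans(cube.reshape(-1, B), k=2).reshape(H, W)
# The two synthetic regions should come out as two clean spatial segments.
```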
Physicians are required to be almost 100% accurate in their jobs, especially when it comes to making final diagnoses. With something as serious as a cancer diagnosis, physicians are often under a lot of pressure to get it right, sometimes with very limited time and information. Breast cancer is the most common cancer affecting women in many parts of the world, which has motivated efforts to detect breast tumors as early as possible. In many sub-Saharan African countries, the technology used to classify these tumors is limited. While breast cancer is more common in developed countries (> 80 cases per 100,000 people) than in developing countries (< 30 cases per 100,000 people), mortality rates in developed countries are much lower because of advanced research and technology. My approach to this problem is to train a neural network to classify these tumors with very limited data, as if simulating the conditions of a physician who has very little data themselves. The dataset I am working with contains only 9 parameters, each given a score from 1 to 10 based on severity. In a real setting, many different variables can be derived from studying a tumor, and using only 9 has been shown to yield very accurate results. Most research in tumor classification uses images of the tumors as part of the model; for my research, however, I wanted to focus on numerical data, as I believe this better simulates having limited resources.
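A minimal sketch of classifying tumors from 9 severity scores (1 to 10): a single-neuron logistic classifier, the simplest possible "neural network". The data below is synthetic, with malignant-like samples drawn at higher scores than benign-like ones; the real dataset, the actual network architecture, and the score distributions are not specified by the summary above.

```python
import numpy as np

# Single-neuron classifier on 9 severity scores (1-10), standing in for the
# limited-data setting described above. The data is synthetic: benign-like
# samples get low scores (1-4), malignant-like samples get high scores (6-10).
# The real model and dataset details are assumptions for illustration.

rng = np.random.default_rng(1)
n = 200
benign = rng.integers(1, 5, size=(n, 9))       # low severity scores
malignant = rng.integers(6, 11, size=(n, 9))   # high severity scores
X = np.vstack([benign, malignant]) / 10.0      # scale scores into (0, 1]
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = benign, 1 = malignant

w, b = np.zeros(9), 0.0
lr = 0.5
for _ in range(500):                           # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # sigmoid output
    grad = p - y                               # dLoss/dlogit (cross-entropy)
    w -= lr * X.T @ grad / len(y)
    b -= lr * grad.mean()

acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because the synthetic classes are well separated, this toy converges easily; the interesting question in the real project is how well such a model generalizes when the data is scarce and noisy.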