Highly adaptable and self-motivated Computer Science graduate with a strong academic foundation in Chemistry. Skilled in Python, PyTorch, OpenCV, and modern machine learning frameworks. Experienced in computer vision, reinforcement learning, and NLP through hands-on projects. Strong collaborator and communicator, capable of working independently and within cross-functional teams.
Role: Computer Vision Developer
Tools & Technologies: Python, OpenCV, Scikit-learn, Matplotlib, Git, SIFT, morphological processing, thresholding, region-based segmentation
Description:
Developed a computer vision pipeline to segment and identify sea turtles in underwater images using traditional image processing and feature extraction methods. Achieved high Intersection over Union (IoU) scores across image categories, with segmentation remaining robust under rotation, scaling, and noise.
Role: Deep Learning Engineer
Tools & Technologies: Python, PyTorch, Transformers, LSTM, GRU, BERT, Matplotlib, Scikit-learn, Git
Description:
Built and optimized deep learning models to perform question classification and answer selection in a QA system. Compared transformer-based architectures (e.g., BERT) with traditional RNNs (LSTM and GRU), analyzing performance across different tasks and data partitions. Achieved state-of-the-art results on accuracy and F1-score, and applied dropout and learning rate tuning to enhance model generalization. Presented findings through visualizations and critical evaluation of model behaviors.
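The RNN baseline compared against BERT above can be sketched in PyTorch. This is a minimal illustrative model, not the project's actual architecture: the class name and hyperparameters are assumptions, and the dropout layer mirrors the regularization mentioned in the description.

```python
import torch
import torch.nn as nn

class LSTMQuestionClassifier(nn.Module):
    """Hypothetical bidirectional LSTM baseline for question classification."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.drop = nn.Dropout(0.3)  # dropout for generalization, as noted above
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embed(token_ids)           # (batch, seq, embed_dim)
        out, _ = self.lstm(emb)               # (batch, seq, 2 * hidden_dim)
        pooled = out.mean(dim=1)              # mean-pool over time steps
        return self.fc(self.drop(pooled))     # class logits
```

A transformer comparison would swap this encoder for a pretrained BERT model and fine-tune it on the same label set, keeping the classification head analogous.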
Role: Machine Learning Engineer
Tools & Technologies: Python, Scikit-learn, XGBoost, SMOTE, GridSearchCV, RandomizedSearchCV, PCA, LDA
Description:
Developed and compared machine learning models to classify customer feedback into 28 product categories using natural language processing techniques and 300-dimensional vectorized features. Implemented advanced preprocessing, feature selection (SelectKBest, L1, PCA, LDA), and class imbalance handling (SMOTE, class weights). Conducted extensive hyperparameter tuning via GridSearchCV and manual optimization to improve Macro F1 from 0.4483 to 0.4734. The final XGBoost model achieved 0.77 accuracy and significantly improved class-level performance over logistic regression, especially for minority classes.
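The tuning workflow above can be sketched with scikit-learn. This is a hedged toy example on synthetic data, not the project code: the dataset, class counts, and parameter grid are stand-ins, and it uses class weights (one of the imbalance strategies listed) rather than SMOTE so the sketch stays self-contained.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Synthetic imbalanced stand-in for the vectorized feedback features
X, y = make_classification(n_samples=400, n_features=50, n_informative=20,
                           n_classes=4, weights=[0.55, 0.25, 0.15, 0.05],
                           random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),          # univariate feature selection
    ("clf", LogisticRegression(max_iter=1000,
                               class_weight="balanced")),  # imbalance handling
])

# Grid search scored on Macro F1 so minority classes count equally
grid = GridSearchCV(
    pipe,
    {"select__k": [10, 30, 50], "clf__C": [0.1, 1.0, 10.0]},
    scoring="f1_macro",
    cv=3,
)
grid.fit(X, y)
```

Swapping the final estimator for an XGBoost classifier and widening the grid follows the same pattern; Macro F1 as the scoring metric is what makes minority-class gains visible during tuning.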