In collaboration with Dr. JC Boucher from the Department of Political Science, I started working on this tweet analysis project. I experimented with state-of-the-art and state-of-practice NLP techniques to understand tweets and to categorize them based on how they frame the issues around a particular subject: GloVe embeddings, the Word Mover's Distance measure, spaCy, and short-text clustering, to name a few. The results will be used to monitor how campaigns are performing, as well as which political views and narratives are more prevalent in Canadian communities. For example, Alberta Health Services is launching an HPV vaccination campaign and wants to adjust and optimize it based on how the public, both proponents and opponents, talk about it.
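For a flavor of the distance measure involved, here is a minimal sketch of comparing two tweets with Word Mover's Distance over pre-trained GloVe vectors; the model name and sample tweets are illustrative, not the project's actual data:

```python
# A sketch of tweet comparison with Word Mover's Distance over GloVe vectors.
# Requires gensim (and the POT package, which gensim uses for wmdistance).
import gensim.downloader as api

vectors = api.load("glove-twitter-25")  # small GloVe model trained on tweets

tweet_a = "the hpv vaccine protects against cervical cancer".split()
tweet_b = "vaccination campaigns prevent cancers in teens".split()

# Lower distance means more similar wording/framing; pairwise distances
# like this can then feed a short-text clustering step.
print(vectors.wmdistance(tweet_a, tweet_b))
```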
I developed a test generator, test executor, and flight data recorder tool that works with the Paparazzi autopilot. I created this tool as part of my thesis: I needed realistic flight telemetry data from Paparazzi, but few system tests were available in the project itself.
I implemented and compared several deep neural network architectures to see how well they can detect phishing web pages. They were trained and evaluated on a combination of several datasets.
In this project, I experimented with embeddings (word-level and character-level), RNNs using GRU cells, and convolutions. The data came from multiple sources, so I developed a simple pipeline to merge the datasets and perform the cleaning on the fly. Keras's functional API played an important role in implementing these architectures.
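As a sketch of how those pieces fit together with the functional API (the real architectures varied; the input length, vocabulary size, and layer sizes here are assumptions):

```python
# A minimal sketch of a hybrid recurrent/convolutional classifier for
# character-encoded inputs, assembled with Keras's functional API.
from tensorflow import keras
from tensorflow.keras import layers

MAX_LEN = 200     # assumed maximum input length, in characters
VOCAB_SIZE = 128  # assumed character vocabulary size

inputs = keras.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 32)(inputs)  # character-level embeddings

# Two parallel branches over the same embedded sequence.
rnn_branch = layers.GRU(64)(x)
conv_branch = layers.Conv1D(64, kernel_size=5, activation="relu")(x)
conv_branch = layers.GlobalMaxPooling1D()(conv_branch)

merged = layers.concatenate([rnn_branch, conv_branch])
outputs = layers.Dense(1, activation="sigmoid")(merged)  # phishing vs. benign

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```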
More details are available in the repository's readme and the 1-page extended abstract.
This is a project I did as my MSc project for MicroPilot Inc., to generate a state model of their UAV autopilot software for testing and verification. I created a deep neural network, a hybrid of convolutional and recurrent layers, that takes as input the values of the autopilot's inputs and outputs (sensor readings and servo outputs) and predicts the state of the system. Although I had access to the autopilot's 500k-LOC code base, I decided to develop this method as a black-box approach that does not assume access to the internals, making it more generalizable and more broadly useful.
This project was quite a journey and a hands-on learning opportunity for me. Data collection, data cleaning, and storage made up a major part of the project, since it was an industry collaboration where standardized datasets were not available. The architecture was inspired by several similar tasks, such as human activity recognition and image segmentation. The evaluation was also somewhat challenging, and I had to implement a variation of precision, recall, and F1 score.
On the implementation side, I had a lot to learn too. I learned how to compile TensorFlow to use the GPU on my machine. I used a pipeline of Python generators fed into TensorFlow's Dataset API to optimize memory utilization and computational performance. I created a custom layer in Keras to properly handle masking of the padding zeros, and I also learned to create custom loss functions and custom validation metrics.
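Here is a minimal sketch of that generator-to-Dataset pipeline; the window shape, state count, and generator body are made up for illustration:

```python
# Feeding a Python generator into TensorFlow's Dataset API, as in the
# training pipeline described above. Shapes and data are illustrative.
import numpy as np
import tensorflow as tf

def telemetry_windows():
    # Stand-in for the real generator yielding (sensor window, state) pairs.
    for _ in range(1000):
        window = np.random.rand(50, 8).astype("float32")  # 50 steps, 8 signals
        state = np.int32(np.random.randint(0, 5))         # one of 5 states
        yield window, state

dataset = (
    tf.data.Dataset.from_generator(
        telemetry_windows,
        output_signature=(
            tf.TensorSpec(shape=(50, 8), dtype=tf.float32),
            tf.TensorSpec(shape=(), dtype=tf.int32),
        ),
    )
    .shuffle(256)                # decorrelate consecutive flight windows
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)  # overlap data preparation with training
)
```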
A conference paper with more details on this project's implementation and results was published at the ASE '20 conference; you can read it for free on sea-lab's website. I further extended this project with hyper-parameter optimization, replicated it on the Paparazzi autopilot (check out the Paparazzi Tester project above as well), and applied transfer learning to reduce training overhead. Details are available in my thesis, which you can find here.
iHealth Card is a personal, card-shaped USB stick that keeps its owner's medical records in a distributed and encrypted manner. It is intended as an interim, cost-effective, and scalable solution that is independent of internet infrastructure, targeted for use in developing countries. It is accompanied by an Android app that lets healthcare workers access the data on the card. Karndeep and I developed the Android app using Java and Kotlin.
I trained an LSTM-based recurrent neural network with character embeddings on the corpus of my own tweets to generate new tweets in my writing style.
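The model itself is simple; here is a minimal sketch, with assumed vocabulary size, window length, and layer sizes, and with corpus loading omitted:

```python
# A character-level LSTM language model of the kind described above.
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 100  # assumed number of distinct characters in the tweet corpus
SEQ_LEN = 40      # assumed context window, in characters

inputs = keras.Input(shape=(SEQ_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 16)(inputs)                 # char embeddings
x = layers.LSTM(128)(x)
outputs = layers.Dense(VOCAB_SIZE, activation="softmax")(x)  # next character

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Train on (window, next-character) pairs sliced from the tweets, then sample
# from the output distribution repeatedly to generate new text.
```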
This project started in March 2019 at the Neuro Nexus hackathon. I was on the team until the project wrapped up in May of that year, after which I decided to leave the team for several reasons. We created a tool to assist laboratories and clinicians with translating pharmacogenomic testing results into clinically useful recommendations. The recommendations are based on expert guidelines developed by the Clinical Pharmacogenetics Implementation Consortium (CPIC) and the Royal Dutch Association for the Advancement of Pharmacy - Pharmacogenetics Working Group (DPWG).
This project was my first Canadian leadership experience, and it boosted my confidence to work in a professional international environment. In our first meeting, I took the initiative, drawing on my experience with Nivad Cloud, to lead a brainstorming session to come up with a solution, set the goals and milestones, and assign tasks to team members. We decided to create an online tool with an intuitive, simple interface for clinicians and laboratories. Masoud and Patrick designed and implemented the front end and the report structure with input from the product owner (also called the challenge champion), Dr. Chad Bousman. Patrick, given his design experience, also created the logo, pamphlets, poster, and demo-day presentation. Nima and I created the backend using Python, and I deployed it as a Lambda function with a serverless architecture on AWS. I taught Nicolas and Elzanne the basics of working with the Amazon RDS service, as they were in charge of cleaning and transforming the data from the sources into a relational model for the backend to use.
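The overall shape of that serverless backend is easy to picture; here is a minimal sketch of a Lambda handler behind API Gateway. The request fields and the gene name are hypothetical, and the real lookup ran against the guideline data in RDS:

```python
# A sketch of a serverless request handler for genotype-to-recommendation
# lookups. Event fields and example values are illustrative only.
import json

def handler(event, context):
    # With API Gateway's proxy integration, the request body arrives as JSON.
    body = json.loads(event.get("body") or "{}")
    gene = body.get("gene")          # e.g. "CYP2D6" (hypothetical example)
    genotype = body.get("genotype")

    # Placeholder for the RDS query mapping a genotype to a
    # CPIC/DPWG-guideline-based recommendation.
    recommendation = {"gene": gene, "genotype": genotype, "advice": "..."}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(recommendation),
    }
```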
I created a positive atmosphere and a healthy balance during development, where everyone felt useful, always having something meaningful to do without being overwhelmed by the amount of work left. I made sure everyone had freedom of decision-making in their sub-team while having clear boundaries for who should do what. We managed to create a complete working prototype, meeting the deadline without much pressure. On the last night before the demo day, I called a final brainstorming meeting to plan the presentation, where we summarized the problem we were solving and enumerated the most important strengths of our solution. Patrick and Elzanne did a fantastic job of presenting the project to the judges and the general public on demo day.
What I took from this project was mostly about leadership and soft skills in general, though I experimented with and learned technical skills such as AWS as well.
P.S. I just noticed that the names of three team members (Nima, Masoud, and me) have been removed from the development team list, basically denying us credit for what we did for the project. Quite immoral, isn't it?
I co-founded the startup Nivad Cloud, a BaaS solution whose first and most successful, innovative service was secure in-app purchases. I learned and practiced a great deal of soft skills while managing this project, ranging from negotiation and conflict resolution to making engaging presentations to customers, basic financial management, and wearing multiple hats but not too many. We sponsored PyCon in Tehran as well as a nationwide hackathon, for which I did the negotiations. I made great progress in my technical skills as well: I designed and developed the RESTful API to best practices and industry standards, created the front end and backend of the management dashboard, designed the database, and deployed the service on the servers. It was an amazing journey, from starting a hobby side project to growing it into a successful startup serving thousands of customers in production.
From a technical view, I had an amazing learning experience in backend development and maintenance as well. I learned how to set up an Nginx reverse proxy, keep services alive with Docker, and manage SSL certificates. I separated the API from the users' dashboard to achieve higher availability and much higher performance for the API. I learned how to properly use the Redis key-value store to lower the overhead of API calls, making them as fast as possible without losing any accounting data, and I also used it to cache the results of sluggish queries. I learned how to leverage Celery to run computationally intensive tasks asynchronously, as well as to run periodic tasks such as issuing invoices and processing transactions at the end of the day.
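As a sketch of that last part (the broker URL, task names, and schedule are illustrative, not Nivad's actual code):

```python
# Asynchronous and end-of-day periodic tasks with Celery, backed by Redis.
from celery import Celery
from celery.schedules import crontab

app = Celery("billing", broker="redis://localhost:6379/0")

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Issue invoices and settle transactions at the end of each day.
    sender.add_periodic_task(crontab(hour=23, minute=55),
                             issue_daily_invoices.s())

@app.task
def process_purchase(purchase_id):
    # Heavy work runs off the request path, keeping API calls fast.
    ...

@app.task
def issue_daily_invoices():
    ...  # aggregate the day's usage records into invoices
```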
This is one of the most influential projects on my professional path. I gained plenty of technical and soft skills. But more importantly, I expanded my network, came out of my bubble, and communicated with people of quite different backgrounds (customers, investors, job applicants, etc.). I developed an entrepreneurial/business vision, which has even helped me be successful in the stock market today. Overall, despite all the mistakes and shortcomings, I believe this project made me stronger and wiser.
My BSc. capstone project was a collaboration with Sahar, my supervisor's MSc. student at the time, on crowdsourcing a data collection task. I developed a website that users could use for data labeling and peer verification of the labels. The platform is equipped with multiple gamification elements: it rewards participants who submit more verified, correct labels with scores, awards them achievement badges, and creates a competitive environment by showing them a leaderboard. I deployed this project on OpenShift and, at my supervisor's request, created a step-by-step tutorial for later use.
The website was live for a limited time, during which hundreds of users labeled thousands of data points, with peer verification. The top 10 participants were later rewarded (in the real world, not just with game scores) for their help.
Mini Google is a search engine for academic papers. It crawls ResearchGate for academic papers and indexes the title, authors, and abstract in an Elasticsearch backend. Mohammad Hossein and I implemented this as the third project for the Modern Information Retrieval course. We performed clustering based on co-authorship, calculated PageRank based on citations, and showed both on the results page.
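For a sense of the indexing side, here is a minimal sketch using the Python Elasticsearch client; the index name and document fields are assumptions based on the description above:

```python
# Indexing a crawled paper and running a full-text query on Elasticsearch.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

paper = {
    "title": "An Example Paper Title",
    "authors": ["A. Author", "B. Author"],
    "abstract": "A short abstract of the paper...",
}
es.index(index="papers", document=paper)
es.indices.refresh(index="papers")  # make the new document searchable

# Match query over the abstract field:
hits = es.search(index="papers", query={"match": {"abstract": "example"}})
```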
As the second project for the Modern Information Retrieval course, I evaluated the performance of two supervised binary classification algorithms, KNN and Naïve Bayes, on the IMDb movie reviews dataset with a bag-of-words model. In addition, I did the same on the MEDLINE medical prescriptions dataset as a multi-class classification task. Both were evaluated and compared using the F1 score and the area under the ROC curve. Data cleaning, tokenization, vectorization, stemming, and the classification algorithms themselves were all implemented from scratch, as required by the course.
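As a flavor of the from-scratch implementation, here is a tiny multinomial Naïve Bayes sketch with Laplace smoothing and log-space scoring; the toy documents are illustrative:

```python
# Multinomial Naive Bayes over a bag-of-words model, from scratch.
import math
from collections import Counter, defaultdict

docs = [("great movie loved it", "pos"), ("terrible boring film", "neg")]

class_counts = Counter(label for _, label in docs)
word_counts = defaultdict(Counter)
for text, label in docs:
    word_counts[label].update(text.split())
vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    def score(label):
        total = sum(word_counts[label].values())
        prior = math.log(class_counts[label] / len(docs))
        # Laplace smoothing handles words unseen in a class.
        return prior + sum(
            math.log((word_counts[label][w] + 1) / (total + len(vocab)))
            for w in text.split()
        )
    return max(class_counts, key=score)

print(predict("great movie"))  # -> "pos"
```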
As the first course project, I implemented an inverted index and query engine. Tokenization, stemming, vectorization with the TF-IDF model, and query processing were all implemented in Java from scratch.
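The course implementation was in Java; here is a compact Python sketch of the core inverted-index idea (stemming and TF-IDF weighting would sit on top of this):

```python
# The core of an inverted index: mapping each term to the documents
# containing it. Tokenization here is naive whitespace splitting.
from collections import defaultdict

index = defaultdict(set)  # term -> set of document ids

def add_document(doc_id, text):
    for token in text.lower().split():
        index[token].add(doc_id)

def query(term):
    return index.get(term.lower(), set())

add_document(1, "information retrieval with inverted indexes")
add_document(2, "ranking documents with TF-IDF")
print(query("inverted"))  # {1}
```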
In this project, I implemented all the socket and networking code, and parts of the game UI, in Java. We created a multi-agent game for ACM ICPC participants as an extra contest they could optionally submit code to. In the Gold Hunters game, two groups of gold-seeking agents explore a map in search of hidden treasures and might occasionally engage in a fight. We developed two versions, with slightly different objectives as well as improvements, for two rounds of the regional contest in 2014 and 2015.