Quantum Machine Learning & Practical Object Detector
#CellStratAILab #disrupt4.0 #WeCreateAISuperstars
CellStrat is India’s No. 1 open AI Lab, with more than 120 AI researchers and interns engaged in advanced AI skilling and research.
The first AI Lab meetup in this New Year 2020 saw amazing demos and presentations by our AI Lab members.
Quantum Machine Learning (Quantum Computing) :-
First, Niraj Kale launched our Quantum Computing research group with a superb introduction to Quantum Machine Learning. Quantum Computing is an exciting new field which can solve many previously intractable computing problems and dramatically speed up existing AI and ML solutions.
Traditional computing is based on models such as the Turing machine and lambda calculus, which represent memory in the “classical” way.
A quantum computation represents memory as a quantum superposition of the possible classical states. To do this, it relies on quantum bits or “qubits”. For example, in classical computing we have logic gates in electronic circuits such as AND, OR or NOR, and the memory unit is either 0 (OFF) or 1 (ON) [i.e. FALSE or TRUE].
A “p-bit” represents a bit in terms of the probability of it being 0 or 1. A “qubit” in a quantum state is a “superposition” of 0 and 1. Superposition means the qubit holds both 0 and 1 at once, and a register of qubits can in fact be in multiple states at the same time.
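A minimal NumPy sketch of superposition: a qubit is a unit vector of two complex amplitudes, and measurement probabilities are the squared magnitudes of those amplitudes.

```python
import numpy as np

# A qubit state is a unit vector of two complex amplitudes (alpha, beta);
# measurement yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.
zero = np.array([1, 0], dtype=complex)  # |0>
one = np.array([0, 1], dtype=complex)   # |1>

# Equal superposition of |0> and |1> (the state a Hadamard gate produces from |0>)
plus = (zero + one) / np.sqrt(2)

probs = np.abs(plus) ** 2
print(probs)  # [0.5 0.5] -- a 50/50 chance of measuring 0 or 1
```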
The second major characteristic of a quantum bit is the fact that qubits follow “Entanglement”. This means that states of entangled qubits cannot be described independently of each other.
Representing multiple qubits means an exponential increase in the number of parameters: a system of n qubits requires 2^n complex amplitudes to describe.
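This exponential growth can be seen directly by combining single-qubit states with the tensor (Kronecker) product, as in this small sketch:

```python
import numpy as np

# The joint state of n qubits lives in a 2**n-dimensional space: combining
# qubits with the Kronecker product doubles the state-vector length per qubit.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # single-qubit superposition

state = np.array([1.0 + 0j])
for n in range(1, 5):
    state = np.kron(state, plus)
    print(n, "qubit(s) ->", state.size, "amplitudes")
# 1 qubit(s) -> 2 amplitudes ... up to 4 qubit(s) -> 16 amplitudes
```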
Quantum Computing has its own set of Gates as shown below.
Also, Quantum Circuits are reversible (as opposed to classical circuits, e.g. an AND gate, which only go one way): we can move from input to output and back again.
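Reversibility follows from the fact that quantum gates are unitary matrices, so every gate is undone by its conjugate transpose. A small sketch using the X (NOT) gate and the standard Hadamard gate:

```python
import numpy as np

# Quantum gates are unitary matrices, so each gate can be undone by its
# conjugate transpose. The X (NOT) gate is even its own inverse.
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard

zero = np.array([1, 0], dtype=complex)
assert np.allclose(X @ (X @ zero), zero)       # applying X twice recovers the input
assert np.allclose(H.conj().T @ H, np.eye(2))  # unitarity: H-dagger H = I
# Contrast: a classical AND gate maps (0,0), (0,1) and (1,0) all to 0,
# so the input cannot be recovered from the output.
```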
A Quantum circuit may look like the below. The X, Y and T here are the Quantum Gates shown above.
There are various algorithms in Quantum Computing such as Deutsch–Jozsa algorithm, Bernstein–Vazirani algorithm, Simon’s algorithm, Grover’s algorithm and Quantum Counting. We will cover these in future research notes.
Quantum Computing can exponentially speed up computation, particularly in areas like ML and DL. Data available in a classical format can be converted into quantum data, processed with Quantum ML, and then converted back to classical data.
One can develop Quantum Neural Networks. In the image below, we show a Quantum Neural Network for classification. “Here we depict a sample quantum neural network, where in contrast to hidden layers in classical deep neural networks, the boxes represent entangling actions, or “quantum gates”, on qubits. In a superconducting qubit setup this could be enacted through a microwave control pulse corresponding to each box.” (Courtesy: https://ai.googleblog.com/2018/12/exploring-quantum-neural-networks.html).
Another key concept of Quantum Computing is the Quantum Annealing. In a traditional rugged cost/energy landscape (cost along y axis, configuration along x), the optimization logic or the thermal jump has to be over the barrier – but the quantum tunneling can jump right through; thus traversing becomes more efficient particularly when the cost barriers are tall but thin.
Finally, Niraj presented some demos related to Quantum ML, viz. Vertex Cover, Constrained Scheduling, a Qiskit QSVM classifier and qGAN.
The real-world application for this example might be a network provider’s routers interconnected by fiber-optic cables, or the traffic lights at a city’s intersections. It is posed as a graph problem; here, the five-node star graph shown below. Intuitively, the solution to this small example is obvious: the minimum set of vertices that touches all edges is node 0 alone. But the general problem of finding such a set is NP-hard.
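For intuition, here is a brute-force sketch of minimum vertex cover on the five-node star graph (node 0 at the center, connected to nodes 1–4); it works only for tiny graphs, since the search is exponential in general:

```python
from itertools import combinations

# Five-node star graph: node 0 at the center, connected to nodes 1..4.
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]
nodes = range(5)

def min_vertex_cover(nodes, edges):
    # Try subsets in increasing size; the first subset touching every
    # edge is a minimum cover. Exponential in general (NP-hard problem).
    for k in range(len(list(nodes)) + 1):
        for subset in combinations(nodes, k):
            if all(u in subset or v in subset for u, v in edges):
                return set(subset)

print(min_vertex_cover(nodes, edges))  # {0} -- the center node covers all edges
```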
Practical Object Detector (build Object Detector from scratch) :-
Shreeyash Pawar presented a detailed and extensive seminar on how to build a real-time object detector from scratch. Shreeyash started with a CNN recap. As we know, CNNs specialize in image processing applications.
Object localization involves not only detecting an object but also identifying its class and drawing a bounding box around it.
The Region CNNs and their various versions, such as Fast R-CNN, Faster R-CNN and Mask R-CNN, follow the general principle of extracting region proposals and then performing classification and bounding-box regression at the region level.
Then there are single-stage detectors such as YOLO v3 and SSD. These skip the region-proposal stage and run detection directly over a dense sampling of possible locations. They are faster and simpler, but can give up some accuracy.
SSD has several grids of different sizes. The MobileNet+SSD version has 6 grids with sizes 19×19, 10×10, 5×5, 3×3, 2×2, and 1×1. SSD does not literally split the image into grids; rather, it predicts offsets from predefined default boxes (anchors) at every location of the feature map.
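As an illustration (a sketch of the common SSD convention, with the usual variance scaling factors omitted for clarity), decoding a predicted offset against a default box looks roughly like this:

```python
import math

# Sketch of SSD-style box decoding: center offsets are scaled by the anchor
# size, and width/height offsets are log-scaled (the common convention).
def decode(anchor, offsets):
    acx, acy, aw, ah = anchor  # default box: center-x, center-y, width, height
    tx, ty, tw, th = offsets   # predicted offsets for this anchor
    cx = acx + tx * aw
    cy = acy + ty * ah
    w = aw * math.exp(tw)
    h = ah * math.exp(th)
    return (cx, cy, w, h)

anchor = (0.5, 0.5, 0.2, 0.2)  # a default box at the image center
print(decode(anchor, (0.0, 0.0, 0.0, 0.0)))  # zero offsets -> the anchor itself
```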
Then there are plenty of pre-trained models such as ResNet, VGGNet, GoogLeNet, MobileNet etc. These act as feature extractors trained on a large set of pre-sampled images. The feature extractor impacts both accuracy and speed: ResNet and Inception are often used when accuracy matters more than speed, while MobileNet paired with SSD gives a lightweight detector.
How do we measure the accuracy of our object detection models? We have various metrics such as :-
1) IoU (Intersection over Union) – a detection is a true positive if its “intersection over union” (IoU) with a ground-truth box is greater than some threshold
2) mAP (Mean Average Precision) – combine all detections from all test images to draw a precision-recall (PR) curve for each class; the “average precision” (AP) is the area under the PR curve, and mAP is the mean of AP across classes
3) Speed – time taken for a single forward pass
In general, Faster RCNN is more accurate while R-FCN and SSD are faster.
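The two accuracy metrics can be sketched in a few lines of plain Python: IoU to match a detection against ground truth, and AP as the area under the PR curve.

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2). IoU = intersection area / union area.
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(precisions, recalls):
    # Area under the PR curve via the trapezoid rule (points sorted by recall).
    ap = 0.0
    for i in range(1, len(recalls)):
        ap += (recalls[i] - recalls[i - 1]) * (precisions[i] + precisions[i - 1]) / 2
    return ap

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143
```

With an IoU threshold of 0.5, these two overlapping boxes would not count as a match.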
The model development process has these steps :-
- defining problem statement including environment, hardware and data
- determining the tools (e.g. TensorFlow) and the overall pipeline
- collect data
- may need data augmentation; for realistic data, it may be necessary to capture new original images
- data pre-processing, including resizing, splitting into train/test, labelling, and creating CSV label files
- generate TFRecords (only for the TF model architecture)
- training (may use transfer learning as well)
- optionally observe the model run with TensorBoard
- post-training, convert the model (the TF model is called an inference graph) to a model file (.pb file)
- use model for inference
- optionally, use model compression/optimization toolkits such as MorphNet, OpenVINO (Intel hardware), ArmNN (ARM chips) or TensorRT (Nvidia GPUs).
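The pre-processing step above can be sketched with the standard library alone; the filenames and CSV layout here are illustrative, not a fixed format.

```python
import csv
import random

# Illustrative labelled dataset: (filename, class) pairs.
samples = [(f"img_{i:03d}.jpg", "cat" if i % 2 else "dog") for i in range(100)]

# Shuffle, then split 80/20 into train and test sets.
random.seed(42)
random.shuffle(samples)
split = int(0.8 * len(samples))
train, test = samples[:split], samples[split:]

# Write the label CSVs that later pipeline stages would consume.
for name, rows in [("train_labels.csv", train), ("test_labels.csv", test)]:
    with open(name, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "class"])
        writer.writerows(rows)

print(len(train), len(test))  # 80 20
```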
Shreeyash then presented some lessons learnt, which are very intuitive :-
- Complex feature extractors like ResNet and Inception-ResNet are highly accurate if speed is not a concern.
- Experiment with different feature extractors to find a good balance between speed and accuracy. Some lightweight extractors yield significant speed improvements with a tolerable accuracy drop.
- At the cost of speed, higher-resolution input images improve accuracy, in particular for small objects.
- Fewer proposals for Faster R-CNN can improve speed without too much accuracy drop (https://arxiv.org/pdf/1611.10012.pdf).
- Single shot detectors tend to have problems for objects that are too close or too small.
- For large objects, SSD performs pretty well even with a simpler extractor. SSD can even match other detector accuracies with better extractor. But SSD performs much worse on small objects compared to other methods.
- The feature extractor is a significant speed bottleneck.
Our AI Lab researchers are true AI superstars and are taking our AI Lab to world class level.
Interested in exploring our AI Lab as well as world-class AI ML skilling programs ? If yes, attend our AI Lab this Saturday in BLR and participate in this amazing NLP workshop. Please RSVP below :-
BLR AI Lab meetup :-
Register : https://www.meetup.com/Disrupt-4-0/events/qqmxlrybccbpb/
Topic : Hands-on Workshop Text-to-Speech using Tacotron
Date : Saturday 11th Jan 2020, 10:30 AM – 5 PM
Presenters : Indrajit Singh
See you this weekend for the AI Lab workshop ! Let’s disrupt the world with AI, together !
Questions ? Call me at +91-9742800566 !
Co-Founder & Chief Data Scientist, CellStrat