Muhammad Maaz
I am an M.Sc. Computer Vision graduate from MBZUAI, where I worked under the
supervision of Dr. Salman Khan
and Dr. Fahad Khan.
My interests focus on developing multi-modal understanding from vision and text to improve the common-sense reasoning of machines, with applications in open-vocabulary and open-world object detection.
I am also exploring efficient neural network design for edge-computing devices (e.g., the Jetson Nano).
I received my B.Sc. degree in Electrical Engineering from UET Lahore with honors in 2018.
After graduation, I joined Confiz Limited as a Computer Vision Engineer, where I worked on the design and deployment of deep-learning-driven computer vision solutions for the retail industry.
In 2022, I graduated from MBZUAI with an M.Sc. degree in Computer Vision.
Email / CV / Google Scholar / GitHub / LinkedIn
Research and Publications
* denotes equal contribution
Class-agnostic Object Detection with Multi-modal Transformer
Muhammad Maaz*,
Hanoona Rasheed*,
Salman Khan,
Fahad Shahbaz Khan,
Rao Muhammad Anwer,
Ming-Hsuan Yang
ECCV, 2022
project page / arXiv / video
In this work, we explore the potential of recent Multi-modal Vision Transformers (MViTs) for class-agnostic object detection. Our extensive experiments across various domains and novel objects demonstrate the state-of-the-art performance of MViTs in localizing generic objects in images. We also develop an efficient and flexible MViT architecture that uses multi-scale feature processing and deformable self-attention, and can adaptively generate proposals for a given language query.
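A minimal sketch of how such language-conditioned, class-agnostic proposals might be consumed in practice: proposals produced for a few generic text queries are merged and de-duplicated with NMS. The `detect` stub, the query strings and the thresholds below are placeholders for illustration, not the paper's released code.

```python
# Hedged sketch: combining class-agnostic proposals from several generic
# language queries, in the spirit of the MViT experiments described above.
# The `detect` stub stands in for a real multi-modal detector that returns
# boxes and scores for a text query.
import torch
from torchvision.ops import nms

def detect(image: torch.Tensor, query: str):
    """Placeholder detector: replace with an actual MViT forward pass.
    Returns (boxes [N, 4] in xyxy format, scores [N])."""
    n = 50
    xy = torch.rand(n, 2) * 200
    wh = torch.rand(n, 2) * 100 + 1
    return torch.cat([xy, xy + wh], dim=1), torch.rand(n)

def class_agnostic_proposals(image, queries=("all objects", "all entities"),
                             iou_thr=0.7, top_k=100):
    """Merge proposals produced for several generic text queries and
    suppress duplicates with NMS to obtain class-agnostic detections."""
    all_boxes, all_scores = [], []
    for q in queries:
        boxes, scores = detect(image, q)
        all_boxes.append(boxes)
        all_scores.append(scores)
    boxes = torch.cat(all_boxes)
    scores = torch.cat(all_scores)
    keep = nms(boxes, scores, iou_thr)[:top_k]
    return boxes[keep], scores[keep]

image = torch.rand(3, 224, 224)              # dummy image tensor
boxes, scores = class_agnostic_proposals(image)
print(boxes.shape, scores.shape)
```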
Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection
Hanoona Rasheed*,
Muhammad Maaz*,
Muhammad Uzair Khattak,
Salman Khan,
Fahad Shahbaz Khan
NeurIPS, 2022
project page / arXiv / video
In this work, we propose to solve the open-vocabulary detection (OVD) problem using a pretrained CLIP model,
adapting it for object-centric local regions through region-based distillation and image-level weak supervision.
Specifically, we utilize high-quality class-agnostic and class-specific object proposals from a pretrained
multi-modal vision transformer (MViT). The class-agnostic proposals are used to distill region-specific
information from CLIP, while the class-specific proposals allow us to visually ground large vocabularies. We also
introduce a region-conditioned weight transfer method to obtain complementary benefits from both region-based
distillation and image-level supervision.
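A minimal sketch of the region-based distillation idea, under stated assumptions: detector region embeddings for class-agnostic proposals are pulled toward CLIP embeddings of the corresponding image crops with an L1 objective. The tiny encoders, the feature stride and the loss choice below are illustrative stand-ins, not the released implementation.

```python
# Hedged sketch of region-based knowledge distillation as described above:
# detector region embeddings are pushed toward CLIP image embeddings of the
# corresponding class-agnostic proposals. The encoders here are placeholders
# for the real CLIP image encoder and detection head.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import roi_align

embed_dim = 512
clip_image_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim))  # stand-in for CLIP
region_head = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim))         # stand-in detector head

def distillation_loss(image, features, proposals):
    """image: [1, 3, H, W]; features: [1, C, h, w] detector feature map;
    proposals: [N, 4] class-agnostic boxes in image coordinates (xyxy)."""
    rois = torch.cat([torch.zeros(len(proposals), 1), proposals], dim=1)  # prepend batch index
    # Detector-side region embeddings pooled from its feature map (stride 16 assumed here).
    region_feats = roi_align(features, rois, output_size=(7, 7), spatial_scale=1 / 16)
    region_emb = F.normalize(region_head(region_feats), dim=-1)
    # Teacher embeddings: CLIP applied to the cropped-and-resized proposal regions.
    crops = roi_align(image, rois, output_size=(224, 224))
    with torch.no_grad():
        clip_emb = F.normalize(clip_image_encoder(crops), dim=-1)
    return F.l1_loss(region_emb, clip_emb)

image = torch.rand(1, 3, 512, 512)
features = torch.rand(1, 256, 32, 32)
proposals = torch.tensor([[30.0, 40.0, 200.0, 220.0], [100.0, 50.0, 300.0, 400.0]])
print(distillation_loss(image, features, proposals))
```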
EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications
Muhammad Maaz*,
Abdelrahman Shaker*,
Hisham Cholakkal,
Salman Khan,
Syed Waqas Zamir,
Rao Muhammad Anwer,
Fahad Shahbaz Khan
CADL (ECCVW), 2022
project page / arXiv / video
In this work, we design a resource-efficient, general-purpose backbone network for vision tasks. We combine
the strengths of CNN and Transformer models and propose a new efficient hybrid architecture, EdgeNeXt.
Specifically, EdgeNeXt introduces a split depth-wise transpose attention (SDTA) encoder that splits input
tensors into multiple channel groups and utilizes depth-wise convolution along with self-attention across
channel dimensions to implicitly increase the receptive field and encode multi-scale features. Our extensive
experiments on classification, detection and segmentation tasks reveal the merits of the proposed approach,
which outperforms state-of-the-art methods with comparatively lower compute requirements.
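A rough sketch of the SDTA idea described above, assuming a simplified block: channel groups are processed by cascaded depth-wise convolutions, then self-attention is applied across the channel dimension (transposed attention), so the attention map is C x C and the cost stays linear in the number of pixels. Normalization, head handling and scaling are simplified relative to the official EdgeNeXt code.

```python
# Hedged sketch of a split depth-wise transposed attention (SDTA) style block.
import torch
import torch.nn as nn

class SDTABlock(nn.Module):
    def __init__(self, dim: int, splits: int = 4, heads: int = 4):
        super().__init__()
        self.splits = splits
        self.heads = heads
        chunk = dim // splits
        # One depth-wise 3x3 conv per channel group (the first group is passed through).
        self.dw_convs = nn.ModuleList(
            nn.Conv2d(chunk, chunk, 3, padding=1, groups=chunk) for _ in range(splits - 1)
        )
        self.norm = nn.LayerNorm(dim)
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                       # x: [B, C, H, W]
        B, C, H, W = x.shape
        groups = list(torch.chunk(x, self.splits, dim=1))
        out, prev = [groups[0]], groups[0]
        for conv, g in zip(self.dw_convs, groups[1:]):
            prev = conv(g + prev)               # cascade: each group sees the previous output
            out.append(prev)
        x = torch.cat(out, dim=1)

        # Transposed (channel-wise) self-attention: tokens are channels.
        t = self.norm(x.flatten(2).transpose(1, 2))          # [B, HW, C]
        q, k, v = self.qkv(t).chunk(3, dim=-1)
        q = q.transpose(1, 2).reshape(B, self.heads, C // self.heads, H * W)
        k = k.transpose(1, 2).reshape(B, self.heads, C // self.heads, H * W)
        v = v.transpose(1, 2).reshape(B, self.heads, C // self.heads, H * W)
        attn = (q @ k.transpose(-2, -1)) / (H * W) ** 0.5    # [B, heads, C/h, C/h]
        t = (attn.softmax(dim=-1) @ v).reshape(B, C, H * W).transpose(1, 2)
        return x + self.proj(t).transpose(1, 2).reshape(B, C, H, W)

block = SDTABlock(dim=64)
print(block(torch.rand(2, 64, 32, 32)).shape)   # torch.Size([2, 64, 32, 32])
```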
UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation
Abdelrahman Shaker,
Muhammad Maaz,
Hanoona Rasheed,
Salman Khan,
Ming-Hsuan Yang,
Fahad Shahbaz Khan
Under Review, 2022
project page / arXiv
In this work, we propose a 3D medical image segmentation approach, named UNETR++, that offers both
high-quality segmentation masks and efficiency in terms of parameters and compute cost. The core
of our design is a novel efficient paired attention (EPA) block that learns spatial and channel-wise
discriminative features using a pair of inter-dependent branches based on spatial and channel attention.
Our spatial attention formulation is efficient, having linear complexity with respect to the input
sequence length. To enable communication between the spatial and channel-focused branches, we share the
weights of the query and key mapping functions, which provides a complementary benefit (paired attention)
while also reducing the overall network parameters.
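A hedged sketch of a paired-attention block in the spirit of the EPA description above: the spatial and channel branches share the same query/key projections, each keeps its own value projection, and the spatial branch stays linear in the sequence length by pooling keys and values down to a fixed number of tokens. Layer sizes and the exact projections are assumptions, not the UNETR++ implementation.

```python
# Hedged sketch of a paired-attention block with shared query/key weights.
import torch
import torch.nn as nn

class PairedAttention(nn.Module):
    def __init__(self, dim: int, proj_tokens: int = 64):
        super().__init__()
        self.q = nn.Linear(dim, dim)            # shared between the two branches
        self.k = nn.Linear(dim, dim)            # shared between the two branches
        self.v_spatial = nn.Linear(dim, dim)
        self.v_channel = nn.Linear(dim, dim)
        # Keys/values are pooled and projected down to a fixed number of tokens,
        # keeping the spatial attention linear in the sequence length N.
        self.pool = nn.AdaptiveAvgPool1d(proj_tokens)
        self.token_proj = nn.Linear(proj_tokens, proj_tokens)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, x):                       # x: [B, N, C] flattened 3D volume tokens
        B, N, C = x.shape
        q, k = self.q(x), self.k(x)

        # Spatial branch: N queries attend over `proj_tokens` pooled keys/values.
        k_s = self.token_proj(self.pool(k.transpose(1, 2))).transpose(1, 2)            # [B, P, C]
        v_s = self.token_proj(self.pool(self.v_spatial(x).transpose(1, 2))).transpose(1, 2)
        attn_s = (q @ k_s.transpose(1, 2)) / C ** 0.5                                  # [B, N, P]
        spatial = attn_s.softmax(dim=-1) @ v_s                                         # [B, N, C]

        # Channel branch: C x C attention, reusing the same q/k projections.
        attn_c = (q.transpose(1, 2) @ k) / N ** 0.5                                    # [B, C, C]
        channel = (attn_c.softmax(dim=-1) @ self.v_channel(x).transpose(1, 2)).transpose(1, 2)

        return self.out(torch.cat([spatial, channel], dim=-1))

epa = PairedAttention(dim=96)
print(epa(torch.rand(2, 512, 96)).shape)        # torch.Size([2, 512, 96])
```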
Fine-tuned CLIP Models are Efficient Video Learners
Hanoona Rasheed,
Muhammad Uzair Khattak,
Muhammad Maaz,
Salman Khan,
Fahad Shahbaz Khan
Under Review, 2022
project page / arXiv
In this work, we show that a simple Video Fine-tuned CLIP (ViFi-CLIP) baseline is generally sufficient
to bridge the domain gap from images to videos. Our qualitative analysis illustrates that frame-level
processing by the CLIP image encoder, followed by feature pooling and similarity matching with the
corresponding text embeddings, helps ViFi-CLIP implicitly model temporal cues. Such fine-tuning helps the
model focus on scene dynamics, moving objects and inter-object relationships. For low-data regimes where
full fine-tuning is not viable, we propose a ‘bridge and prompt’ approach that first uses fine-tuning to bridge
the domain gap and then learns prompts on the language and vision sides to adapt CLIP representations.
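A minimal sketch of the frame-level pipeline described above, with placeholder encoders standing in for CLIP: frames are encoded independently, average-pooled over time into a video embedding, and matched against class-prompt text embeddings by cosine similarity. The stand-in modules and the number of classes are assumptions for illustration only.

```python
# Hedged sketch of frame-level CLIP encoding, temporal pooling and text matching.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 512
image_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim))  # stand-in for the CLIP visual encoder
text_embeddings = F.normalize(torch.rand(400, embed_dim), dim=-1)      # stand-in class-prompt embeddings

def video_logits(video: torch.Tensor, logit_scale: float = 100.0):
    """video: [B, T, 3, H, W]. Returns class logits [B, num_classes]."""
    B, T = video.shape[:2]
    frames = video.flatten(0, 1)                               # [B*T, 3, H, W]
    frame_feats = image_encoder(frames).reshape(B, T, -1)      # frame-level features
    video_feat = F.normalize(frame_feats.mean(dim=1), dim=-1)  # temporal average pooling
    return logit_scale * video_feat @ text_embeddings.t()      # cosine-similarity logits

logits = video_logits(torch.rand(2, 16, 3, 224, 224))
print(logits.shape)                                            # torch.Size([2, 400])
```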
MaPLe: Multi-modal Prompt Learning
Muhammad Uzair Khattak,
Hanoona Rasheed,
Muhammad Maaz,
Salman Khan,
Fahad Shahbaz Khan
Under Review, 2022
project page / arXiv
In this work, we propose to learn prompts in both the vision and language branches of pretrained CLIP to
adapt it to different downstream tasks. Previous works use prompting in only the language or the vision
branch. We note that using prompting to adapt representations in a single branch of CLIP (language or vision)
is sub-optimal, since it does not allow the flexibility to dynamically adjust both representation spaces on a
downstream task. To this end, we propose Multi-modal Prompt Learning (MaPLe) for both the vision and language
branches to improve alignment between the vision and language representations. Our design promotes strong
coupling between the vision and language prompts to ensure mutual synergy and discourages learning independent
uni-modal solutions.
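A small sketch of the multi-modal prompting idea, assuming a single prompt depth for brevity: learnable language prompts are prepended to the text tokens, and the corresponding vision prompts are generated from them by a learned coupling projection, so the two branches are adapted jointly rather than independently. In MaPLe the prompts are injected into the frozen CLIP encoders at multiple depths; the module below only illustrates the coupling.

```python
# Hedged sketch of coupled vision-language prompt learning.
import torch
import torch.nn as nn

class MultiModalPrompts(nn.Module):
    def __init__(self, n_prompts: int = 4, text_dim: int = 512, vision_dim: int = 768):
        super().__init__()
        self.language_prompts = nn.Parameter(torch.randn(n_prompts, text_dim) * 0.02)
        # Coupling function: vision prompts are projected from the language prompts,
        # tying the two representation spaces together.
        self.coupling = nn.Linear(text_dim, vision_dim)

    def forward(self, text_tokens, image_tokens):
        """text_tokens: [B, L, text_dim]; image_tokens: [B, N, vision_dim]."""
        B = text_tokens.shape[0]
        lang_p = self.language_prompts.unsqueeze(0).expand(B, -1, -1)
        vis_p = self.coupling(lang_p)
        # Prepend the prompts to each branch's token sequence before its encoder.
        return (torch.cat([lang_p, text_tokens], dim=1),
                torch.cat([vis_p, image_tokens], dim=1))

prompter = MultiModalPrompts()
text, image = prompter(torch.rand(2, 77, 512), torch.rand(2, 196, 768))
print(text.shape, image.shape)   # torch.Size([2, 81, 512]) torch.Size([2, 200, 768])
```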
You've probably seen this website template before, thanks to Jon Barron.
Last updated May 2020.