I am an AI researcher working on geometry-aware, hardware-efficient deep learning for 3D point cloud perception. My work is motivated by a concrete goal: bringing reliable 3D scene understanding to robots and autonomous vehicles that run on constrained hardware — no cloud, no server rack, no compromise on accuracy.
I am currently a Research Collaborator at TigerSec Laboratory, Clemson University, working with Dr. Amir Salarpour. I hold an M.Sc. in Artificial Intelligence & Robotics from Sirjan University of Technology (GPA 19.9/20, ranked first in my cohort) and am actively seeking a fully funded PhD position in Europe to extend this research toward real-time robotic scene understanding.
Research
Efficient 3D architectures — designing point cloud networks that achieve competitive accuracy at a fraction of the parameter count of standard models. My recent work includes SLNet (ICRA 2026), which matches state-of-the-art classification accuracy on ModelNet40 with 24× fewer parameters than PointMLP, and NPNet (IEEE IV 2026), which achieves state-of-the-art results among non-parametric 3D recognition methods using adaptive Gaussian–Fourier positional encoding.
Geometry-aware learning — developing encodings and backbones that explicitly exploit the local geometric structure of point clouds, rather than treating 3D data as unstructured feature sets. This is the key to both accuracy and efficiency in resource-constrained settings.
Scalable 3D segmentation — extending efficient architecture principles from object-level classification to large-scale scene segmentation on real-world outdoor and indoor benchmarks (ScanNet, nuScenes, Waymo).
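For readers unfamiliar with Gaussian–Fourier positional encodings, here is a minimal NumPy sketch of the generic random-Fourier-features recipe they build on. This is not NPNet's implementation: the function name, the fixed isotropic `scale` (standing in for NPNet's adaptive bandwidth), and all shapes are illustrative assumptions.

```python
import numpy as np

def gaussian_fourier_encoding(points, num_features=32, scale=1.0, seed=0):
    """Map 3D points to random Fourier features.

    The projection matrix B is drawn from an isotropic Gaussian; here a
    single fixed `scale` stands in for the adaptive bandwidth selection
    described in the text (hypothetical stand-in, not NPNet's scheme).
    """
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, scale, size=(points.shape[-1], num_features))  # (3, F)
    proj = 2.0 * np.pi * points @ B                                    # (N, F)
    # Cosine/sine pairs give a smooth, bounded positional code.
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)       # (N, 2F)

pts = np.random.default_rng(1).uniform(-1.0, 1.0, size=(1024, 3))
feats = gaussian_fourier_encoding(pts, num_features=32)
print(feats.shape)  # (1024, 64)
```

Because the encoding is built from cosines and sines, every feature is bounded in [-1, 1] regardless of the input range, which is one reason this family of encodings behaves well as a drop-in input transform.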
Current Work
I am currently developing SRFD-3D, an efficient segmentation architecture using structured compression and stage-wise relational feature distillation, and AniGeo, which introduces anisotropic RBF-based geometric encoding for lightweight 3D recognition.
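As a rough illustration of what an anisotropic RBF-based geometric encoding can look like, here is a minimal NumPy sketch. It is not the AniGeo design: the diagonal per-axis bandwidths, the function name, and the shapes are assumptions chosen for clarity, not details from the project.

```python
import numpy as np

def anisotropic_rbf_encoding(points, centers, inv_lengthscales):
    """Encode each point against K RBF centers with per-axis bandwidths.

    points:           (N, 3) input coordinates
    centers:          (K, 3) RBF center locations
    inv_lengthscales: (K, 3) inverse lengthscale per center and axis;
                      unequal values per axis make the kernel anisotropic
                      (a hypothetical diagonal parameterization).
    """
    diff = points[:, None, :] - centers[None, :, :]     # (N, K, 3)
    scaled = diff * inv_lengthscales[None, :, :]        # per-axis scaling
    return np.exp(-0.5 * np.sum(scaled ** 2, axis=-1))  # (N, K), in (0, 1]

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(256, 3))
centers = rng.uniform(-1.0, 1.0, size=(16, 3))
inv_ls = rng.uniform(0.5, 4.0, size=(16, 3))  # distinct per axis -> anisotropy
enc = anisotropic_rbf_encoding(pts, centers, inv_ls)
print(enc.shape)  # (256, 16)
```

The appeal of such encodings in lightweight models is that the geometry is captured by a small bank of kernel parameters rather than by deep stacks of learned layers, so the per-point feature extraction stays cheap.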
I am open to collaborations on efficient 3D deep learning, point cloud perception, and edge deployment for autonomous systems. Feel free to reach out.
