Computing applications used in semiconductor design and manufacturing face ever-increasing requirements for speed, accuracy and reliability as leading-edge nodes, fueled by high-performance computing and deep learning, enter the 5-nm era. These applications include inverse lithography technology (ILT) to produce curvilinear shapes on photomasks, mask process correction (MPC) for multi-beam mask writing to process these extremely complex mask shapes, curvilinear mask and wafer simulation and verification, and deep learning for photomask and semiconductor manufacturing. D2S software applications are based on NVIDIA CUDA, a parallel computing platform and programming model for GPUs. The newest-generation CDP with GPU acceleration from D2S offers 1.8 PFLOPS of single-precision (SP) computing power in a one-rack CDP, enabling simulation-based, accurate manipulation and analysis, particularly of curvilinear shapes, that are not possible with CPU-only applications.
From creating and processing complex mask shapes, to helping write the masks, to analyzing mask SEM data, to providing deep learning engines, D2S GPU-accelerated solutions help customers achieve manufacturing success on their leading-edge mask and chip designs.
Features
- Scalable processing solution for simulation-based semiconductor design and manufacturing applications
- High speed, accuracy and reliability required for 24×7 cleanroom production environments
- Featuring NVIDIA Ampere architecture-based A40 GPUs
- Achieves more than 1,800,000,000,000,000 floating-point operations per second (1.8 PFLOPS) of single-precision (SP) processing speed per rack (a rough back-of-envelope estimate follows this list)
- Algorithms redesigned from the ground up to be single-instruction-multiple-data (SIMD) and co-designed with the CDP hardware to take full advantage of GPU acceleration (see the kernel sketch after this list)
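
As a rough sanity check on the headline throughput figure, and assuming NVIDIA's published peak FP32 rating of approximately 37.4 TFLOPS per A40 (the per-rack GPU count is not stated here), 1.8 PFLOPS ÷ 37.4 TFLOPS ≈ 48, i.e. on the order of 48 A40 GPUs per rack. This is a back-of-envelope estimate, not a stated D2S configuration.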
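
To illustrate the SIMD, data-parallel style referred to above, the following is a minimal CUDA sketch, not D2S production code: every GPU thread applies the same instruction stream to a different pixel of a simulated image. The kernel name, the thresholding operation and the tile size are illustrative assumptions only.

```cuda
// Illustrative only: a minimal data-parallel (SIMD-style) CUDA kernel.
// The operation (thresholding a simulated image into a binary mask map)
// and all names are hypothetical, not D2S code.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Each thread handles one pixel: the same instructions applied to many
// data elements, which keeps the GPU's SIMD lanes fully occupied.
__global__ void thresholdImage(const float* image, unsigned char* mask,
                               int width, int height, float threshold)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    mask[idx] = image[idx] >= threshold ? 1 : 0;
}

int main()
{
    const int width = 4096, height = 4096;      // one simulation tile (hypothetical size)
    const size_t n = size_t(width) * height;

    std::vector<float> hostImage(n, 0.5f);       // placeholder intensity values
    std::vector<unsigned char> hostMask(n, 0);

    float* devImage = nullptr;
    unsigned char* devMask = nullptr;
    cudaMalloc(&devImage, n * sizeof(float));
    cudaMalloc(&devMask, n * sizeof(unsigned char));
    cudaMemcpy(devImage, hostImage.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch one thread per pixel in 16x16 blocks.
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    thresholdImage<<<grid, block>>>(devImage, devMask, width, height, 0.5f);
    cudaDeviceSynchronize();

    cudaMemcpy(hostMask.data(), devMask, n * sizeof(unsigned char), cudaMemcpyDeviceToHost);
    printf("center pixel in mask: %d\n", int(hostMask[size_t(height / 2) * width + width / 2]));

    cudaFree(devImage);
    cudaFree(devMask);
    return 0;
}
```

Real ILT, MPC and simulation pipelines apply far more elaborate kernels than this, but the execution pattern sketched here, identical instructions across one thread per pixel or shape sample, is the property that the GPU-accelerated approach described above relies on.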
Benefits
- Makes full-chip, curvilinear ILT masks a practical production reality
- Offers software-based differentiation for semiconductor manufacturing equipment
- Enables real-time, inline processing of image-based data
- Delivers an off-the-shelf solution: no custom ASICs or FPGAs are required
- Enables effective deep learning integration both for training and inferencing