In intelligent transportation, the Internet of Things (IoT) commonly relies on 3D object detection as a crucial component of Vehicle-to-Everything (V2X) cooperative perception. However, discrepancies in sensor configurations between vehicles and infrastructure produce point clouds that differ in scale and are heterogeneous, which degrades the generalization of 3D object detection models. To address this performance gap, we propose the Dual-Channel Generalization Neural Network (DCGNN), which incorporates a novel data-level downsampling and calibration module together with a cross-perspective Squeeze-and-Excitation attention mechanism for improved feature fusion. Experiments on the DAIR-V2X dataset show that DCGNN outperforms detectors trained on a single dataset, with significant improvements over the selected baseline models.
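The cross-perspective attention mechanism builds on the standard Squeeze-and-Excitation (SE) idea of reweighting feature channels by a learned gate. As a minimal illustrative sketch only, the following NumPy code shows plain SE channel reweighting over point-cloud features; the function name, shapes, and weights are hypothetical, and the cross-perspective fusion specifics of DCGNN are not reproduced here.

```python
import numpy as np

def se_reweight(features, w1, w2):
    """Standard Squeeze-and-Excitation channel reweighting (illustrative).

    features: (C, N) array, C feature channels over N points.
    w1: (C//r, C) and w2: (C, C//r), excitation weights with reduction ratio r.
    """
    # Squeeze: global average pool over the spatial/point dimension -> (C,)
    z = features.mean(axis=1)
    # Excitation: bottleneck MLP, ReLU then sigmoid, yields a per-channel gate
    s = np.maximum(w1 @ z, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # values in (0, 1), shape (C,)
    # Scale: multiply each channel by its gate
    return features * gate[:, None]

rng = np.random.default_rng(0)
C, N, r = 8, 32, 2
feats = rng.standard_normal((C, N))
out = se_reweight(feats,
                  0.1 * rng.standard_normal((C // r, C)),
                  0.1 * rng.standard_normal((C, C // r)))
print(out.shape)  # (8, 32)
```

In a dual-channel setting, one such gate could in principle be computed per branch (vehicle and infrastructure) before fusion, which is the intuition behind applying SE attention across perspectives.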