Received signal strength (RSS) prediction plays an essential role in cellular network planning and deployment, as it aims to estimate the wireless signal quality that a base station (BS) can deliver to user equipment (UE) within an area of interest. While extensive work has been devoted to developing analytical and empirical channel models for signal coverage prediction, these models typically do not account for cell-specific environmental information such as building footprints and other types of clutter. As a result, their RSS predictions can deviate from real-world deployments and measurements. In this paper, we bridge this gap by implementing state-of-the-art RSS prediction methods based on deep learning (DL) and evaluating them using real-world RSS measurements collected from an LTE network operating in the Citizens Broadband Radio Service (CBRS) band. We also present a comprehensive comparison of their RSS prediction performance against analytical and empirical channel models as well as ray tracing (RT) methods. Our evaluations reveal that the existing empirical/analytical channel models and RT methods exhibit unstable RSS prediction performance depending on the environment, with maximum root mean square error (RMSE) values ranging from 7.12 to 12.43 dB and from 6.61 to 18.96 dB, respectively. In contrast, the DL method outperforms these baselines, achieving more stable performance with RMSE values ranging from 5.71 to 12.18 dB across the 10 physical cell identities (PCIs), thereby demonstrating its robustness and generalizability.
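The RMSE figures quoted above are assumed to follow the standard definition, computed over the measurement locations associated with a given PCI; the notation $N$, $y_i$, and $\hat{y}_i$ below is introduced here for illustration and is not taken from the original text:
\[
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2},
\]
where $y_i$ is the measured RSS (in dB) at the $i$-th location, $\hat{y}_i$ is the corresponding predicted RSS, and $N$ is the number of measurements for the PCI under evaluation.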