This paper introduces a novel offline approach to power control in wireless networks based on a multi-agent reinforcement learning (MARL) framework. We develop a multi-agent decision transformer method to optimize performance metrics such as sum-rate and packet delay. In this distributed method, each agent controls an individual link and determines its power level based on its own measurements and on information exchanged with a few agents within a limited neighborhood. Numerical results demonstrate that the proposed method achieves quality-of-service performance comparable to that of centralized methods using global information, for both sum-rate maximization and traffic-driven packet delay minimization. As an offline learning solution, it can efficiently leverage knowledge from existing mature techniques, and it offers significant advantages over existing online methods in safety, stability, and convergence rate. This work provides a promising alternative for learning-based resource management in wireless networks.