Abstract
This paper proposes a new approach in which Q-learning, a Reinforcement Learning (RL) technique, is integrated into the Fuzzy Linear Programming (FLP) paradigm to improve peer selection in P2P networks. Using Q-learning, the proposed method exploits real-time feedback to adjust and update peer selection strategies as network conditions change. The FLP framework enriches this process by handling imprecise information through fuzzy logic, and is used to pursue multiple objectives simultaneously: enhancing throughput, reducing delay, and guaranteeing reliable connections. This integration effectively addresses network uncertainty, making the network configuration more stable and flexible. Throughout operation, the Q-learning agent observes and records various state metrics, including available bandwidth, latency, packet drop rates, and node connectivity. It then selects actions by choosing optimal peers for each node, updating a Q-table that maps states and actions based on these performance indices. A reward signal derived from these indices guides the agent's learning, refining its peer selection strategy over time. The FLP framework supports the Q-learning agent by providing optimized solutions that balance competing objectives under uncertain conditions: fuzzy parameters capture the variability in network metrics, and the FLP model solves a fuzzy linear program whose solution serves as a guideline for the Q-learning agent's decisions. Experimental results demonstrate the effectiveness of the method. Simulations on Erdős–Rényi random graphs show that throughput increased by 21% and latency decreased by 17%. Computational efficiency was also notably improved, with computation times reduced by up to five orders of magnitude compared with traditional methods.
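The tabular Q-learning update described above (a reward-driven refinement of a Q-table over peer choices) can be illustrated with the following minimal Python sketch. The metric names, reward weights, and learning parameters here are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of the Q-learning peer-selection step.
# State encoding, reward form, and hyperparameters are assumed, not the authors'.

ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor (assumed values)

def reward(metrics):
    """Scalar reward favoring high bandwidth, low latency, low loss (assumed form)."""
    return metrics["bandwidth"] - metrics["latency"] - 10.0 * metrics["loss"]

def update_q(q_table, state, peer, metrics, next_state, peers):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q_table.get((next_state, p), 0.0) for p in peers)
    old = q_table.get((state, peer), 0.0)
    q_table[(state, peer)] = old + ALPHA * (reward(metrics) + GAMMA * best_next - old)
    return q_table[(state, peer)]

# Usage: a node in state "congested" selects peer "B" and observes link metrics.
q = {}
value = update_q(q, "congested", "B",
                 {"bandwidth": 5.0, "latency": 2.0, "loss": 0.1},
                 "normal", ["A", "B"])
```

In a full system, the FLP solution would bias which peer is selected before each update; here only the Q-table bookkeeping is shown.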
Keywords
Erdős–Rényi model, Fuzzy Linear Programming, Q-learning, P2P network, Q-table, Reinforcement Learning