Your graph still assumes that the win rate depends only on the rating difference. Just as win(x, y) does not need to be an exponentially-curved function of (x - y), it may not be a function of (x - y) at all: for example, the probability that a 1400 beats a 1200 may differ from the probability that an 1800 beats a 1600. Have you tried drawing a graph that takes this into account?
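As a concrete illustration (the parametrization and constants below are entirely made up, just to show what "not a function of the difference" could look like), here is a win model where the logistic scale shrinks with the average rating level, so the same 200-point gap predicts a different win probability at 1400 vs 1200 than at 1800 vs 1600:

```python
import math

def win_prob(x, y, base_scale=400.0, shrink=0.0005):
    """Probability that a player rated x beats a player rated y.

    Unlike the Elo formula, this depends on the rating *level*
    (x + y) / 2 as well as on the difference x - y: the logistic
    scale narrows as the average rating grows, so a fixed gap is
    more decisive between stronger players.  The constants are
    purely illustrative.
    """
    level = (x + y) / 2.0
    scale = base_scale / (1.0 + shrink * level)
    return 1.0 / (1.0 + 10.0 ** (-(x - y) / scale))

print(win_prob(1400, 1200))  # 200-point gap at a low rating level
print(win_prob(1800, 1600))  # same gap at a higher level -> different probability
```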
You say that a good player cannot lose/gain much by playing against a noob, but what about the noob? If the noob can gain a lot by winning against a good player, it seems that the noob could pay a master to lose some games against them: the noob would end up with an artificially high rating, and the master would not lose much.
I would use the MLE method to compare the different models (i.e., rating systems): take a history of all the games; for each game in the history, ask each model for the probability that each player wins (given their history so far); give -log(p) penalty points when an event with predicted probability p happens; and the model with the smallest total penalty wins, by being the best at predicting the results. Such a model can then be used for matchmaking, by matching you against a player whose probability of winning against you is close to 50%.
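A minimal sketch of that comparison (the `predict`/`update` interface and the toy game history are my own assumptions, just to make the scoring procedure concrete; a plain Elo model is used as the example baseline):

```python
import math

def log_loss_penalty(model, games):
    """Total -log(p) penalty of a rating model over a game history.

    `model.predict(winner, loser)` is assumed to return the model's
    probability that `winner` beats `loser` *before* the game, and
    `model.update(winner, loser)` feeds it the observed result so that
    later predictions use the history.  Smaller total penalty means
    better predictions.
    """
    penalty = 0.0
    for winner, loser in games:
        p = model.predict(winner, loser)
        penalty += -math.log(p)
        model.update(winner, loser)
    return penalty

class EloModel:
    """Standard Elo update, used here only as an example model."""
    def __init__(self, k=32.0, scale=400.0):
        self.k, self.scale = k, scale
        self.rating = {}
    def predict(self, a, b):
        diff = self.rating.get(a, 1500.0) - self.rating.get(b, 1500.0)
        return 1.0 / (1.0 + 10.0 ** (-diff / self.scale))
    def update(self, winner, loser):
        p = self.predict(winner, loser)
        self.rating[winner] = self.rating.get(winner, 1500.0) + self.k * (1.0 - p)
        self.rating[loser] = self.rating.get(loser, 1500.0) - self.k * (1.0 - p)

# Toy history of (winner, loser) pairs; the model with the smallest
# penalty over the real history is the best predictor.
games = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("carol", "alice")]
print(log_loss_penalty(EloModel(), games))
```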