Because of some confusion, I'm going to make this brutally clear.
First of all, digital is a lossy format. Sampling can never fully capture a continuous signal, but once captured, the data never degrades. A one is always a one. You simply provide enough data that your ears can't tell the difference. Analog (i.e. LP, cassette) is the only truly lossless format, aside from a live performance. Unfortunately, it's vulnerable to environmental damage, and it readily exposes any weakness in the quality of the equipment reproducing the sound.
Now, onto digital encoding. As I said, digital signals are lossy. When you buy a CD, there is already some loss compared to the source tracks: portions of the original signal are simply missing. That data is unrecoverable. It's gone. Poof. It exists only in the original. Now, when you take that already-lossy signal and introduce even more loss (i.e. re-encode it, at a higher or lower bitrate, it doesn't matter), more data is gone. Never to come back.
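You can see this with a toy sketch in plain Python (floats stand in for the "analog" source, and crude amplitude quantization stands in for real lossy encoding; the bit depths and sample rate here are illustrative, not any actual codec):

```python
import math

def quantize(samples, bits):
    """Round each sample (in [-1.0, 1.0]) to the nearest level of a bits-bit grid."""
    step = 2.0 / (2 ** bits)
    return [round(s / step) * step for s in samples]

# A pure 440 Hz tone at a nominal 44100 Hz -- our stand-in for the source.
source = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(1000)]

coarse = quantize(source, 8)    # first lossy pass
recoded = quantize(coarse, 16)  # re-encode the lossy copy at a "higher bitrate"

err_coarse = max(abs(a - b) for a, b in zip(source, coarse))
err_recoded = max(abs(a - b) for a, b in zip(source, recoded))

# The 16-bit grid preserves the 8-bit copy perfectly, but the error relative
# to the source is unchanged: the detail lost in the first pass never returns.
```

Running it, `err_recoded` comes out identical to `err_coarse`: the finer grid faithfully stores the degraded copy, nothing more.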
Keep doing this over and over, and it won't matter what bitrate you encode at: you cannot recreate data that no longer exists. Sure, you can build complex algorithms that "attempt" to reconstruct the missing segments, but they can only be so good, and they will never match the source. The worse the reconstruction, the worse the final result.
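Here's a hedged sketch of that reconstruction point, using linear interpolation as a deliberately simple stand-in for whatever fancy algorithm you like (the signal and decimation factor are made up for the example):

```python
import math

# Source: a fast-varying tone (period of 5 samples).
source = [math.sin(2 * math.pi * n / 5) for n in range(100)]

kept = source[::2]  # lossy step: throw away every other sample

# "Reconstruction algorithm" stand-in: guess each missing sample as the
# average of its neighbours (linear interpolation).
reconstructed = []
for i in range(len(kept) - 1):
    reconstructed.append(kept[i])
    reconstructed.append((kept[i] + kept[i + 1]) / 2)
reconstructed.append(kept[-1])

err = max(abs(a - b) for a, b in zip(source, reconstructed))
# err > 0: the guesses are plausible, but they are guesses -- the discarded
# samples themselves are not recovered.
```

A smarter algorithm would shrink `err`, but any reconstruction is an estimate built from what survived, not the missing data itself.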
So, in conclusion, you only get out what you put in. It is impossible for any data signal to emerge better than its source, no matter what anyone says. It can simply mask and replicate, not duplicate.
s### in, s### out. Simple as that.