Stay with your MP3s; converting them will sound like dung any way you do it. AAC should only be used if you're ripping from the CD. Give it a try for yourself: convert a couple of MP3s to AAC (or MP3 > WAV > AAC), stick them on some good headphones, and have a listen.
Both MP3 and AAC are lossy encoders, meaning they drop information from the sound file that you probably won't notice in order to reduce file size. On top of that, they encode differently, so what gets chosen to be dropped differs between encoders. So when you convert from one lossy format to another, you lose even more information. You should only convert to a lossy format from a lossless one, like from WAV to MP3 or from AIFF to AAC.
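For what it's worth, here's a toy Python sketch of why chaining two lossy encoders hurts more than either one alone. The "codecs" here are made up (each just keeps a different slice of the frequency spectrum); real MP3 and AAC are far more sophisticated, but the principle is the same: each encoder drops a different set of details, and the losses stack.

```python
# Toy illustration only -- these are not real MP3/AAC codecs.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(4096)  # stand-in for a chunk of audio samples

def codec_a(x):
    """Toy codec A: keep only the 512 strongest frequency bins."""
    spec = np.fft.rfft(x)
    spec[np.argsort(np.abs(spec))[:-512]] = 0  # zeroed bins are gone for good
    return np.fft.irfft(spec, n=len(x))

def codec_b(x):
    """Toy codec B: keep only the 512 lowest frequency bins."""
    spec = np.fft.rfft(x)
    spec[512:] = 0
    return np.fft.irfft(spec, n=len(x))

def rms_error(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print("A alone: ", rms_error(signal, codec_a(signal)))
print("B alone: ", rms_error(signal, codec_b(signal)))
print("A then B:", rms_error(signal, codec_b(codec_a(signal))))  # worst of the three
```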
What if there's no choice? I have most of my music in WMA recorded at a very high bit rate because I've been using Media Center 10. Don't ask me why, but I just like iTunes better: the interface, the store, and the Party DJ option. So I have to convert my files. I think I want to go to 192 kbps AAC, which sounds good to me.
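For what it's worth, I was going to script the conversion rather than do it by hand. Here's a rough sketch; it assumes ffmpeg (with its built-in AAC encoder) is installed and on the PATH, and the folder path is just a placeholder. It's still the lossy-to-lossy transcode warned about above, so the 192 kbps target is there to keep the damage small.

```python
# Rough sketch: batch-convert a folder of WMA files to 192 kbps AAC.
# Assumes ffmpeg is installed and on PATH; the folder path is a placeholder.
import subprocess
from pathlib import Path

music_dir = Path("~/Music/wma").expanduser()  # placeholder path

for wma in music_dir.glob("*.wma"):
    m4a = wma.with_suffix(".m4a")
    subprocess.run(
        ["ffmpeg", "-i", str(wma), "-c:a", "aac", "-b:a", "192k", str(m4a)],
        check=True,  # stop on the first failed conversion
    )
```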
Because of some confusion, I'm going to make this brutally clear.
First of all, digital is a lossy format. It's impossible to fully capture the original signal, but once captured it will never degrade: a one is always a one. You can simply provide enough data that your ears won't know the difference. Analog (i.e., LP, cassette) is the only true lossless format, aside from a live performance. Unfortunately, it's vulnerable to environmental damage and easily shows any weakness in the quality of the equipment reproducing the sound.
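To make that concrete, here's a toy Python sketch of both halves of the claim (the numbers are illustrative, not a real analog-to-digital converter): quantizing a signal to 16 bits loses a little information, but copying the resulting digital data is bit-perfect.

```python
# Toy illustration: digital capture loses a little; digital copies lose nothing.
import numpy as np

t = np.linspace(0, 1, 44_100, endpoint=False)  # one second at CD sample rate
analog = np.sin(2 * np.pi * 440 * t)           # ideal 440 Hz tone

digital = np.round(analog * 32767).astype(np.int16)  # 16-bit quantization, as on a CD
reconstructed = digital / 32767.0

print("max capture error:", np.max(np.abs(analog - reconstructed)))  # ~1.5e-5: small, never zero
print("copy is identical:", np.array_equal(digital, digital.copy()))  # True: a one is always a one
```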
Now, onto digital encoding. As I said, digital signals are lossy. When you buy the CD, there is already some loss compared to the source tracks: some portions of the signal are missing relative to the original. That data is unrecoverable. It's gone. Poof. It only remains with the original. Now, when you take that already-lossy signal and cause even more loss (i.e., re-encode it, whether at a higher or lower bitrate; it doesn't matter), that data is gone too, never to come back.
Keep doing this over and over, and it won't matter what bitrate you encode at: it's impossible to recreate a data signal that no longer exists. Sure, you can build complex algorithms to "attempt" to reconstruct the missing segments, but they can only be so good, and they will never be as good as the source. The worse the reconstruction, the worse the final result will be.
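Here's that point as a toy Python sketch (a made-up "codec", not real MP3/AAC): once a heavy lossy pass has discarded data, re-encoding at a much higher quality setting recovers none of it.

```python
# Toy illustration only -- not a real codec.
import numpy as np

rng = np.random.default_rng(1)
original = rng.standard_normal(4096)

def toy_lossy(x, keep):
    """Toy 'codec': keep only the `keep` strongest frequency bins."""
    spec = np.fft.rfft(x)
    spec[np.argsort(np.abs(spec))[:-keep]] = 0
    return np.fft.irfft(spec, n=len(x))

def rms_error(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

low_quality = toy_lossy(original, keep=256)      # heavy loss: a low-bitrate "rip"
re_encoded  = toy_lossy(low_quality, keep=2048)  # "transcode" at a much higher rate

print("after lossy pass:         ", rms_error(original, low_quality))
print("after high-rate re-encode:", rms_error(original, re_encoded))  # no improvement
```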
So, in conclusion, you only get out what you put in. It is impossible for any data signal to emerge better than its source, no matter what anyone says. An encoder can mask and approximate the missing data; it can't duplicate it.