Ambisonics is a spatial audio format for describing a sound field. First-order Ambisonics (FOA) is a popular format comprising only 4 channels. This limited channel count comes at the expense of spatial accuracy. Ideally, one would retain the efficiency of the FOA format without its limitations. We have devised a data-driven spatial audio solution that retains the efficiency of the FOA format while achieving quality that surpasses conventional renderers. Using a fully convolutional time-domain audio separation network (Conv-TasNet), we created a solution that takes a FOA input and produces a higher-order Ambisonics (HOA) output. This data-driven approach is novel compared to conventional physics- and psychoacoustics-based renderers. Quantitative evaluations showed a 0.6 dB average positional mean squared error difference between predicted and actual third-order HOA. The median qualitative rating showed an 80% improvement in perceived quality over the traditional rendering method.
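As a minimal sketch of the kind of quantitative check described above, the snippet below computes a mean squared error in dB between a predicted and a reference third-order HOA signal. The channel counts follow the standard Ambisonics rule of (N+1)^2 channels for order N (so 4 for FOA, 16 for third order); the helper name, array shapes, and noise level are illustrative assumptions, not the paper's evaluation pipeline.

```python
import numpy as np

# Channel counts follow the (N+1)^2 Ambisonics rule:
# first order (N=1) -> 4 channels, third order (N=3) -> 16 channels.
FOA_CHANNELS = (1 + 1) ** 2   # 4
HOA3_CHANNELS = (3 + 1) ** 2  # 16

def mse_db(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Mean squared error between two (channels, samples) arrays, in dB.

    Hypothetical helper for illustration; the paper's positional MSE
    metric may weight channels or positions differently.
    """
    mse = np.mean((predicted - reference) ** 2)
    return 10.0 * np.log10(mse)

rng = np.random.default_rng(0)
# 1 second of 16-channel third-order HOA at 48 kHz (synthetic placeholder).
reference = rng.standard_normal((HOA3_CHANNELS, 48000))
# A "prediction" that deviates from the reference by small additive noise.
predicted = reference + 0.1 * rng.standard_normal(reference.shape)

print(f"MSE: {mse_db(predicted, reference):.1f} dB")
```

With noise of standard deviation 0.1, the MSE is about 0.01, i.e. roughly -20 dB; smaller (more negative) values indicate a closer match to the reference HOA.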