Well, you surely can't import the whole of the existing DBF into a table with more fields; a wider record means you reach the 2GB limit even earlier than the original table does.
You could, however, import any subset of the 1.8GB table into new DBFs with subsets of the fields.
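Just as a rough sketch of what splitting by fields could look like (table and field names are placeholders for whatever you actually have, each part keeping the key field so the parts can be related again):

    * split the wide source DBF into narrower DBFs,
    * each keeping the id plus a subset of the fields
    USE bigtable IN 0 ALIAS src

    SELECT id, name, street, city ;
       FROM src ;
       INTO TABLE part1

    SELECT id, notes, remarks ;
       FROM src ;
       INTO TABLE part2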
I wonder why your first step has to be importing everything into a table with a larger set of fields. You can always design your normalized final target tables and import data into them step by step. Maybe you need to explain the problems you face with that approach, but in the migrations I've done, I often fed multiple target tables from the same original source data, using the original table several times as the origin, instead of first designing an intermediate table to accumulate everything into.
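For example, with a flat order table as the source (purely made-up names), two queries fill two normalized targets directly from it, no wider intermediate table needed:

    * one pass extracts the distinct customer data ...
    SELECT DISTINCT custno, custname, custcity ;
       FROM orders_flat ;
       INTO TABLE customers

    * ... another pass extracts the order rows, keeping custno as the foreign key
    SELECT orderno, custno, orderdate, amount ;
       FROM orders_flat ;
       INTO TABLE orders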
And even if you have to, you could work through the data portion by portion.
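A rough sketch of that (chunk size, table names and the transform routine are just placeholders; the transform is whatever fills your target tables from one portion):

    SET SAFETY OFF
    lnChunk = 100000
    USE bigtable IN 0 ALIAS src
    SELECT src
    lnStart = 1
    DO WHILE lnStart <= RECCOUNT("src")
       GO lnStart
       * copy the next portion of records into a small work table
       COPY TO tmpchunk NEXT lnChunk
       * push this portion into the final target tables
       DO transform_chunk WITH "tmpchunk.dbf"
       lnStart = lnStart + lnChunk
    ENDDO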
As we recently talked about VFPA, you know that even in its current version it allows the FPT file to outgrow the 2GB limit. That could help, perhaps, if you make use of it and convert fixed-length char fields, and any varchar fields you might have, into memo fields.
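Converting a char field into a memo field is a one-liner, for example (assuming a free table mydata.dbf with a Char field called description; ALTER TABLE needs exclusive access):

    USE mydata EXCLUSIVE
    * turn the Char field into a memo; its content moves into the FPT,
    * which VFP Advanced lets grow beyond 2GB
    ALTER TABLE mydata ALTER COLUMN description M
    USE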
Well, of course, you could also work with SQL Server or any other database for the intermediate steps, to have more freedom with the intermediate tables you need for the overall transformation process of your data normalization; VFPA isn't the only alternative that allows larger data sets.

Still, I don't see why you would need even larger intermediate tables. The theoretical concept of normalizing data in steps, starting from a fully denormalized form, is something you only need to go through conceptually, not for real. Once you know the final normalized structure, you can do the necessary transformations from where you are straight to that structure, without intermediate steps that blow up the need for storage space.
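If you do go the SQL Server route anyway, plain SQL pass-through is enough to fill a staging table; a minimal (deliberately simple, row-by-row) sketch with the connection string and all names as placeholders, continuing with the src alias opened above:

    lnH = SQLSTRINGCONNECT("Driver={SQL Server};Server=.\SQLEXPRESS;" + ;
       "Database=stagingdb;Trusted_Connection=Yes")
    IF lnH > 0
       SELECT src
       SCAN
          lnId    = src.id
          lcName  = src.name
          lcNotes = src.notes
          * parameterized insert of one source row into the staging table
          SQLEXEC(lnH, "INSERT INTO staging_flat (id, name, notes) " + ;
             "VALUES (?lnId, ?lcName, ?lcNotes)")
       ENDSCAN
       SQLDISCONNECT(lnH)
    ENDIF

In practice you'd rather bulk load than insert row by row, but the principle stays the same.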
Chriss