OK, let's take it backwards from
> Eventually, I need to convert my whole application data access logic to a database like sql server.
I thought this was the thing you were already doing, because the thread about syncing data assumed you sync DBF data with a SQL Server. If you are syncing LAN DBFs with hosted DBFs, you won't gain anything towards moving forward to other technologies.
>I want to use my VFP knowledge to build a single vfp data access layer for all types of clients like desktop
That's not the way you should do it. Using VFP code to access data will give you cursors, and only a VFP frontend can then really do something with the data. So you go from DBF to cursor, and then you will need another conversion to XML, or you even put the data into HTML directly. If you want a modern, future-proof web frontend, you won't use VFP technology. If you don't have any other knowledge, then you either need developers, or you don't go web in the form of browser apps but use a terminal server instead. Then your app can stay as it is and be used remotely.
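To make that concrete, here is a minimal VFP sketch of that extra conversion step (table and field names are made up); the cursor itself is useless to any non-VFP client, so it has to be serialized, e.g. to XML:

    * a VFP cursor is only usable inside VFP; for any other client
    * you need an extra serialization step, for example to XML
    SELECT id, name FROM customers WHERE country = "US" INTO CURSOR crsResult
    LOCAL lcXML
    lcXML = ""
    * 1 = element-centric XML, 512 = output to the named variable,
    * 0 = all records, "1" = include an inline schema
    CURSORTOXML("crsResult", "lcXML", 1, 512, 0, "1")
    * lcXML now holds the XML; it still has to travel over the wire
    * and be parsed again on the client side

That serialization and re-parsing happens on every single request, which is exactly the overhead a database server's native protocol already solves.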
The disadvantage of terminal servers is that you need client CALs (client access licenses), and it only scales with hardware: you need RAM and CPU resources for every user. This is partly also true for web apps, but web apps can make use of client-side code, JavaScript running on the client workstation rather than on your (hosted) server. So a web app scales better than an app made available remotely via terminal server.
>all the data processing will be handle at server level and it will improve VFP data access speed and time
This is what you expect, but it isn't true in all cases. There are advantages and disadvantages to server-side processing. The simplest thing to consider: code executed on the server side needs resources per client, so your performance doesn't scale well, and there are limits to the RAM and cores you can put into a single server. If you want to scale at the server level by adding more servers for more RAM and CPU resources, you have to use techniques that are only well established for database servers in conjunction with web, application, or terminal servers, not for a database consisting of hosted DBFs. Not even a SAN will help you with that problem at the file server level, so it's only a fast solution for a small user base.
The bottleneck for a VFP desktop app is DBF file access, and only that. If you put your data access on the server side, you do nothing else but implement a database server: you receive requests and respond to them. This is what a database server does, and it does it better than you could implement it, because there are decades of knowledge about how to do that.
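For comparison, here is a minimal SQL pass-through sketch (server, database, and table names are assumptions, not your actual setup) showing how little VFP code you need once a real database server handles the requests:

    * minimal SQL pass-through sketch; connection details are made up
    LOCAL lnHandle, ldFrom
    ldFrom = DATE(2013, 1, 1)
    lnHandle = SQLSTRINGCONNECT("Driver={SQL Server};Server=myserver;" + ;
       "Database=mydb;Trusted_Connection=Yes")
    IF lnHandle > 0
       * the server does the filtering; only matching rows cross the network
       SQLEXEC(lnHandle, "SELECT * FROM orders WHERE orderdate >= ?m.ldFrom", "crsOrders")
       SQLDISCONNECT(lnHandle)
    ENDIF

You still get a VFP cursor back for your frontend, but the request/response logic, locking, and caching are the server's problem, not yours.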
Here Claude Fox would say that ActiveVFP does this very well, and you can use your VFP knowledge to do the data access. But ActiveVFP is for creating web apps, not for feeding desktop apps with data. So while it already does data access quite nicely, it does so for creating HTML web frontends, which combine that data with HTML forms and an HTML frontend. You can't recycle your frontend code here.
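Just to illustrate the kind of job such a web layer does (this is generic VFP textmerge, not ActiveVFP's actual API), the data ends up merged into HTML rather than staying a reusable cursor:

    * generic sketch, not ActiveVFP's API: merging cursor data into HTML
    SELECT name, city FROM customers INTO CURSOR crsCust
    LOCAL lcHTML, lcRow
    lcHTML = "<table>"
    SCAN
       TEXT TO lcRow TEXTMERGE NOSHOW
          <tr><td><<ALLTRIM(crsCust.name)>></td><td><<ALLTRIM(crsCust.city)>></td></tr>
       ENDTEXT
       lcHTML = lcHTML + lcRow
    ENDSCAN
    lcHTML = lcHTML + "</table>"
    * lcHTML is frontend output now, not data a desktop app could reuse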
That said, let's move far away from the details and take it from the requirements.
What you should first forget about is the need for centralised data. Depending on the number of users and the number of locations, you need different things and need to split at different levels.
You can split at the frontend level only by using a terminal server/remote desktop. Users use their input devices (mouse, keyboard) to control a remote server which runs the app locally in the LAN of your database, and everything can stay as it is, but you need a lot of server resources to serve many users. This is only good for a situation with the main user base at one main location and only a few remote users.
You can do the inverse and run your desktop apps with local data at each location independently. You will then do data replication/syncing of all the local databases to one central data repository. This will not be the main database for any location; it will just be there to aggregate all data and also distribute it to all locations. It's best done using a SQL Server database, which has all the needed logic for database replication built in. How fast you need data at each location, or from them, determines how much you need to pay for high-speed internet connections, private subscriber lines for example.
If you take on the adventure of writing sync code with DBFs, you will still rather replicate data to each location via a central location that is only there to feed each location's local database, than do what you intend to do now.
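If you do go down that road, a hand-written sync run boils down to something like this sketch. It assumes a hypothetical lastchanged column and rowguid key on every table, and it deliberately ignores conflicts, deletions, and new-row inserts, which are exactly the hard parts SQL Server replication handles for you:

    * naive one-way sync sketch; lastchanged/rowguid are assumed columns,
    * conflict resolution, deletions, and inserts are left out on purpose
    * lnHandle is an already open connection (see the pass-through sketch above)
    LOCAL ltLastSync
    ltLastSync = {^2013-01-01 00:00:00}  && would really be read from a sync log
    SELECT * FROM localorders WHERE lastchanged > ltLastSync INTO CURSOR crsChanged
    SCAN
       SCATTER MEMVAR
       * push each changed row to the central database
       SQLEXEC(lnHandle, "UPDATE orders SET amount = ?m.amount, " + ;
          "lastchanged = ?m.lastchanged WHERE rowguid = ?m.rowguid")
    ENDSCAN

Every one of the omitted cases has to be written, tested, and maintained by you, which is why I'd rather let the database server's replication do it.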
The only good way to have central data for direct use by many clients is really a cloud-based database technology like Azure or similar, and then you can forget about your VFP knowledge both for data access and for the frontend. Even FoxInCloud is a cloud solution in name only; as far as I understand, you still host on a single server. The cloud is only the cloud if requests to the cloud app or data can be handled by multiple servers in different computer/data centers around the world, which therefore obviously need to handle replication of all data and code within the cloud. The cloud is therefore more than just the internet.

FoxInCloud, according to the recent reaction of theirs that jrbbldr referred to, takes .45 seconds per action. That's because every action involves the server; as far as I see, FoxInCloud does not convert forms to HTML forms with code converted to locally running JavaScript. So that unresponsiveness alone wouldn't suffice for my needs, and I doubt it would for yours. And if you got their product for self-hosting, or installed it on hosted Windows servers yourself, you'd still be bound to the bandwidth and throughput you can rent or host on your own. If you intend to move forward, you have to move away from VFP both for the data and for the frontend.
The global distribution of your locations also determines whether you need the cloud or not. If all locations are within one country, a classical web application can work well, but if your users are all over the world, a single central server will mean slow access for users at the other end of the world. Some hosters may still be sufficient, but it all depends on how many requests you'll have, and that's hard to determine from current desktop application code. The bandwidth of your users is something you can't fully control unless you can rent the needed lines at their local workplaces; if you have a global audience, it's not in your hands which internet providers they use. Some have very good bandwidth but less good response/ping times, which would be essential for smooth, responsive interaction.
And now I can easily be misunderstood if you fit all of this into a wrong idea of what several terms mean. For example, responsive web design is not about gaining better performance and responsiveness of a web app, but about responding better to different device resolutions and sizes. So if you make management decisions without knowing the details of terms and their meanings, and without an overview of what the technologies are, then you're bound to make wrong decisions.
Bye, Olaf.