Transition to the cloud
The Thinkwise platform supports a wide variety of deployment options. You can run applications on Linux or Windows, on web servers or in containers, on local hardware or on cloud infrastructure (see also the Reference architecture and environment setup documentation). For improved scalability, built-in high availability, and additional flexibility, we recommend running the Thinkwise platform on cloud-native services.
Benefits of a cloud environment
The main benefits of moving to a cloud environment are:
- Scalability - Cloud infrastructure can provide automatic scaling based on demand. Your application scales up during peak usage and scales down when demand decreases, with costs matching actual consumption.
- Cost efficiency - Cloud also shifts spending from capital expenditure (CAPEX) on hardware purchases to operational expenditure (OPEX) for cloud services. You no longer purchase, maintain, or replace physical servers. The cloud provider handles hardware maintenance, security patches, and infrastructure updates.
- High availability - High availability is built in through automatic replication across multiple data centers. If one location fails, your application continues running from another. Cloud platforms also provide automated backups and disaster recovery, so you can always restore your applications to a specific point in time. This kind of recoverability is difficult to implement on your own hardware.
Available cloud models
Organizations can deploy their infrastructure using three distinct cloud models, each offering different trade-offs between control, cost, and flexibility.
A private cloud runs entirely on infrastructure dedicated to a single organization; somewhat confusingly, the term covers both hosting on-premises and hosting at a third-party provider. This setup offers maximum control over security, compliance, and customization, making it ideal for organizations with strict regulatory requirements.
A public cloud uses shared infrastructure managed by providers like AWS, Azure, or Google Cloud, where multiple customers share the same physical servers. This model delivers cost efficiency and scalability because organizations pay only for what they use without maintaining hardware. The Thinkwise Cloud, for example, operates as a public cloud solution, providing managed hosting for Thinkwise applications while we handle infrastructure and platform updates.
A hybrid cloud combines private and public cloud environments, allowing organizations to keep sensitive workloads in a private environment while utilizing the public cloud for less critical applications or handling peak demand. This flexibility lets organizations balance control, cost, and performance based on their specific needs. However, hybrid clouds introduce additional complexity in setup, security, and management, requiring coordination between different environments.
Steps to the cloud
Moving to a truly cloud-native deployment requires preparation. Your database, file storage, integrations, and authentication mechanisms all need adjustment to work in a cloud environment. This guide walks you through the required steps to prepare your application for any cloud environment, such as Azure, AWS, Google Cloud, or the Thinkwise Cloud.
The transition process consists of the following steps:
- Transition to the Universal UI
- Prepare your application for the cloud
- Set up your cloud environment
- Set up file storage replication
- Migrate to the cloud environment
- Perform integration and acceptance testing
- The final deployment
- Explore cloud capabilities
0. Transition to the Universal UI
Before transitioning to the cloud, you need to transition your application to the Universal UI. The Universal UI uses a 3-tier architecture with Indicium as the service tier, which is required for cloud deployment.
Follow our seven-step transition guide to smoothly transition your application to the Universal UI.
1. Prepare your application for the cloud
Cloud environments support most Thinkwise platform features, but some capabilities require adjustment or replacement. The extent of preparation needed depends on your current implementation and chosen cloud database configuration.
The information in this step focuses on preparing a Microsoft SQL Server database for the cloud. If you are running an IBM DB2 or Oracle database, some features might not be supported in the cloud.
Choose your database configuration
Microsoft SQL Server in the cloud offers three deployment options. Each option provides different capabilities and limitations.
SQL Server on a virtual machine - Provides the same capabilities as an on-premise installation. If you choose this option, you can skip the database preparation steps below and proceed to the next section.
Azure SQL Managed Instance - Offers most SQL Server features in a managed environment. Some limitations apply, but most functionality remains available. We recommend using a managed instance for your databases on the DEV and TEST environments.
Azure SQL Database - The most limited option, but also the most flexible and scalable. This configuration requires the most preparation work but provides the best cloud-native experience. We recommend using this type of database for your databases on the ACC and PROD environments.
This guide focuses on Azure SQL Database preparation. For detailed comparisons between deployment options, see Choosing between Azure SQL Database and SQL Managed Instance.
Replace unsupported features
Several SQL Server features are not supported on cloud-native databases. You need to replace these features before migration.
- Linked servers and cross-database queries - Direct SQL access to other databases is not available. Replace linked server queries with web connections or application connectors. The IAM database cannot be queried directly from your application database.
- The `USE` statement - The `USE` statement no longer works. Queries can only run on the connected database.
- Temporary tables - The `tempdb` database remains accessible, but with limitations. You cannot create physical tables in `tempdb`. Use temporary tables (`#table`) or create actual tables in your data model instead.
- CLR assemblies - Common Language Runtime assemblies are not supported. Replace CLR functionality with stored procedures or with web services accessed through web connections.
- SQL Server jobs - Cloud databases do not support SQL Server-based jobs. Replace scheduled jobs with system flows in your Thinkwise application. System flows provide better control and easier migration across environments.
- Database Mail - Replace `sp_send_dbmail` and Database Mail with the Email system flow solution from the Thinkstore. This solution uses Indicium to send emails instead of the database server, improving security and enabling modern authentication methods like the Microsoft Graph API.
- File system access - Functions like `xp_DirTree` and `OPENDATASOURCE` that access the file system are not available. Migrate all file storage to use the file storage connector in the Software Factory.
- Extended stored procedures - The following procedures are not supported:
  - `xp_cmdshell` - No access to the underlying hardware. Use Infrastructure-as-Code services for infrastructure automation.
  - `fn_get_sql` - Returns the SQL statement text for a specified handle. Microsoft plans to remove this function.
  - `fn_virtualfilestats` - Returns I/O statistics for database files.
  - `fn_virtualservernodes` - Provides failover cluster information.
- Configuration procedures - Replace `sp_configure` with `ALTER DATABASE SCOPED CONFIGURATION`. See ALTER DATABASE SCOPED CONFIGURATION for details.
- Custom messages - Replace `sp_addmessage` with `tsf_send_message`. This provides message translation capabilities and parameter support for better user communication.
- FILESTREAM storage - Convert FILESTREAM data to blob storage. FILESTREAM is not supported in cloud environments.
- Time zone handling - We recommend storing all datetime values in UTC to maintain a consistent absolute reference across all environments and users. Configure domains to automatically convert UTC values to each user's local time zone for display.

  Tip: If migrating existing datetime columns to UTC is not realistic for your application, use time zone conversion functions to maintain your current time zone. When automatically filling datetime fields with default values or in stored procedures, use `CAST(SYSDATETIMEOFFSET() AT TIME ZONE 'Central European Standard Time' AS DATETIME2)` instead of `SYSDATETIME()`. This keeps datetime values in the time zone your application already uses, preventing data inconsistencies after the transition.

- Reserved keywords - Rename any domain called "JSON". JSON is now a reserved keyword and data type in SQL Server, and a domain with this name will cause errors in Azure SQL Database.
- Authentication - Azure SQL Database only supports SQL Server authentication and Microsoft Entra ID (formerly Azure AD). Because Indicium only requires a pool user for database access, we strongly recommend removing all other user accounts from your database. Convert Windows-authenticated users in IAM to external accounts using Microsoft Entra as an OpenID provider.
- External integrations - Third-party tools like reporting services or Python/R integration are not supported and require replacement. Check if the Thinkwise platform now provides this functionality natively. For custom integrations, implement web services accessed through web connections.
For more information about cloud databases, see Transact-SQL differences between SQL Server and Azure SQL Database, Amazon RDS for Microsoft SQL Server, or Google Cloud SQL for SQL Server features.
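To illustrate the configuration-procedure replacement above, this sketch shows how an instance-level `sp_configure` setting maps to its database-scoped equivalent. The `MAXDOP` value of 4 is only an example; choose a value that fits your workload.

```sql
-- On-premise instance-level setting via sp_configure
-- ('max degree of parallelism' is an advanced option, so
-- 'show advanced options' must be enabled first).
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

-- Azure SQL Database equivalent: a database-scoped configuration.
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
```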
Test database compatibility
Verify your database is cloud-ready by creating a BACPAC file. SQL Server generates error messages for any incompatible features during BACPAC creation.
To test database compatibility:
SQL Server Management Studio
- Right-click your database and select Tasks > Export Data-tier Application.
- Complete the export wizard.
- Review any error messages that appear.
- Resolve all reported errors.
- Repeat the export until it completes successfully.
A successful BACPAC export indicates your database is likely ready for cloud deployment. It does not, however, guarantee that the database can be deployed in every cloud environment. Depending on the environment, there might be additional limitations that you only discover during the first restore of the database.
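Before running the export, you can also query the system catalog for some of the most common blockers listed earlier. This is a minimal, non-exhaustive sketch; an empty result for each query is a good sign, not a guarantee.

```sql
-- User-defined CLR assemblies (not supported in Azure SQL Database).
SELECT name FROM sys.assemblies WHERE is_user_defined = 1;

-- Linked servers (cross-database access must be replaced).
SELECT name FROM sys.servers WHERE is_linked = 1;

-- Tables with FILESTREAM data (must be converted to blob storage).
SELECT name FROM sys.tables WHERE filestream_data_space_id IS NOT NULL;
```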
Prepare your integrations
How you connect to external applications and services depends on your cloud setup. On a private or hybrid cloud, integrations on your internal network might remain accessible and may not require changes. On a public cloud, all integrations must be reachable from the internet.
We recommend using REST APIs for all of your integrations. This ensures your integrations work across all environments and are not tied to network topology or file system access. It also makes future transitions easier.
Integrations built on APIs and web connections work without modification in any cloud environment, provided the endpoints are available over the internet. If you have integrations that are not yet web-based, now is the right time to convert them. Use Web connections in the Software Factory and Intelligent Application Manager to connect to external services.
Reports
Cloud environments only support DevExpress reports. Crystal Reports is not supported, and all reports must be converted to DevExpress reports before migrating.
We recommend converting reports to DevExpress reports by hand. DevExpress has an import tool for Crystal Reports, but the import quality is inconsistent, and manual correction is often required. You should only have to convert the report itself; it should not be necessary to change the underlying view or data structure.
- Thinkwise Academy - How to build a Report using DevExpress (Specialist).
Printing
Direct printing works differently in the cloud. Because your application likely no longer runs on the same network as your printers, print jobs must be sent over the internet using an API. The exact implementation depends on your printer hardware and vendor. Two examples of API-based printing solutions are:
- Azure Universal Print - Microsoft's cloud-based print infrastructure, integrated with Microsoft Entra ID.
- Zebra Cloud Connect - A cloud printing solution for Zebra label and receipt printers.
We recommend using a reporting queue for all print jobs. The Reporting service solution from the Thinkstore adds a reporting queue to your model, allowing you to schedule and manage report generation without relying on direct printer access. Reports can be sent to a file storage location or forwarded to a printer accessible via API.
Multi-select printing
The Windows GUI supports printing multiple reports at once using multi-row selection. The Universal UI handles this differently. To replicate this behavior, use a task with a multi-select parameter to insert multiple items into the reporting queue. The queue then processes and prints each report in sequence.
For more information on setting up a reporting queue, see Reporting service.
Prepare your file storage
Cloud environments handle file storage differently than on-premise infrastructure. All file access must go through cloud storage services rather than direct file system paths.
Local files - If your application stores or reads files from a user's local computer, this will not work in the cloud. Web browsers restrict direct access to the local file system for security reasons. Any functionality that relies on local file paths must be updated to use blob storage instead.
File shares - On a public cloud, your application cannot access file shares on your company's internal network. Replace internal file share access with cloud blob storage. The file storage connector in the Software Factory is the recommended way to handle this. For more information, see File storage.
FILESTREAM storage - As mentioned in the list of database limitations, FILESTREAM storage is not supported on cloud databases. Convert any file storage using filestreams to blob storage. If you do not do this, you will not be able to export and restore your database.
Prepare your authentication
Local Active Directory is not available in a public cloud environment. We recommend switching to SSO (Single Sign-On) using OpenID. SSO enables Multi-Factor Authentication, moving beyond password-only login and significantly improving security.
SSO with MFA also removes the need for a VPN. Once configured, users can access your application directly over the internet. Conditional Access policies can add an additional layer of access control, ensuring only authorized users from whitelisted locations or devices can reach your environments.
You can further streamline your users' login experience by utilizing Web domains to set up different login pages for employees, administrators, or customers. This offers the flexibility to only show the relevant options on each login page and automate SSO.
2. Set up your cloud environment
Before migrating your application, you need to establish your cloud infrastructure. The setup process varies depending on your chosen cloud provider.
For detailed setup instructions, see the following guides:
If you are moving to the Thinkwise Cloud, you do not need to set up your own cloud environment. Thinkwise manages the infrastructure for you. Skip this step and proceed to Step 3 – Set up file storage replication.
DNS configuration
When configuring DNS records for your cloud application, do not map directly to IP addresses. Cloud infrastructure uses dynamic IP allocation, so IP addresses can change. Instead, point DNS records to service endpoints or fully qualified domain names provided by your cloud platform.
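As an illustration, a zone-file sketch of this recommendation. The hostnames are hypothetical; use the endpoint name your own cloud platform provides.

```
; Avoid: an A record pinned to a dynamic IP address.
app.example.com.   300  IN  A      20.73.110.15

; Prefer: a CNAME pointing at the platform-provided service endpoint.
app.example.com.   300  IN  CNAME  myapp.azurewebsites.net.
```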
Performance
Cloud performance depends heavily on proper resource sizing and network configuration. Several factors can impact how your application performs in the cloud. Use the Server-Timing responses in your browser's developer tools to identify potential bottlenecks.
Database sizing - Choose an appropriate database tier for your workload. Undersized databases cause slow query performance and timeouts. Start with a tier that matches your on-premise database performance, then adjust based on actual use. You can scale up or down as needed.
Compression and HTTP/2 - Make sure GZIP / Brotli compression and HTTP/2 are enabled on your webserver, as these significantly improve the performance of Thinkwise applications. Exact configuration depends on your infrastructure and underlying operating system.
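For example, on IIS, compression can be enabled in the site configuration as sketched below (dynamic compression requires the Dynamic Content Compression feature to be installed). On IIS, HTTP/2 is not configured here; it is enabled at the operating-system level on Windows Server 2016 and later.

```xml
<!-- Hypothetical web.config fragment: enable static and dynamic compression. -->
<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>
```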
Network configuration - Network architecture significantly impacts performance. Hybrid cloud setups that route traffic through on-premise VPN connections add latency.
Web Application Firewall - Some WAF rules can block legitimate requests or add processing overhead. If you configure your own WAF, test thoroughly to verify it does not interfere with the normal operation of your application.
Security headers - Properly configured security headers are essential for cloud deployments. These headers protect against common web vulnerabilities without impacting performance. For detailed configuration guidance, see Security headers in the reference architecture documentation.
Regional placement - Deploy your application in a cloud region close to your users. An application hosted in Europe will perform worse for users in Asia due to network latency. Most cloud providers offer multiple regions. Choose the one nearest to your primary user base.
Profiling
The profiling capabilities of an Azure SQL Database are not as extensive as those of a local Microsoft SQL Server instance. To support profiling in the cloud, the platform supports the following tools:
Application Insights - Provides detailed telemetry about application performance and errors. This Azure-only service tracks request patterns, response times, and failure rates and can be enabled in Indicium.
Slow Query Log - Identifies underperforming queries. Performance issues often emerge in production due to larger datasets and more active users. Configure the query threshold to match your performance requirements. For more information, see Slow query log.
Application Log - Download this solution from the Thinkstore to log all tasks, subroutines, handlers, and triggers in your application. This tool helps with debugging and auditing.
3. Set up file storage replication
Moving files to the cloud is one of the most time-intensive parts of migration, especially with large datasets. File transfers can take hours or even days, depending on volume. To minimize downtime during final deployment, set up continuous file replication between your current storage and cloud blob storage.
Why replicate early
File storage replication lets you sync files to your cloud environment continuously before go-live. This eliminates the need for massive file transfers on deployment day. During final cutover, you only need to sync files created or modified since the last replication cycle.
Blob storage in the cloud
All cloud file storage should use blob storage. On the Thinkwise Cloud, blob storage is the only supported file storage type. If you currently use file shares or local directories, these must be migrated to blob storage as part of your cloud transition.
Technically, database file storage is also available, but we recommend using it only for small files. Storing larger datasets in your database causes performance issues, especially over time.
Set up replication system flows
Create a system flow for each file storage location in your application.
Each system flow should adhere to the following logic:
- Read files from the existing storage location.
- Write files to the corresponding blob storage location in the cloud.
- Track which files have already been synced to avoid redundant transfers.
Schedule these system flows to run at regular intervals. The frequency depends on how often files are created or modified in your application. Common intervals are hourly or daily.
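The tracking step above could, for example, be backed by a small administration table that the system flow consults on each run. The table and column names below are illustrative, not part of the platform.

```sql
-- Hypothetical sync administration for a replication system flow.
CREATE TABLE file_sync_log (
    file_path    nvarchar(400) NOT NULL PRIMARY KEY,
    last_synced  datetime2     NOT NULL
);

-- Select files that are new or changed since their last successful sync.
SELECT f.file_path
FROM   file_registry f                 -- your application's file administration
LEFT JOIN file_sync_log s ON s.file_path = f.file_path
WHERE  s.file_path IS NULL
   OR  f.modified_on > s.last_synced;
```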
Update absolute file paths
As part of the transition, you need to update file paths to point to the new blob storage locations. This involves more than just changing path values. You must update task logic, add new file storage locations to your model, configure permissions, and clean up deprecated storage locations after a successful transition.
1. Add new file storage locations
Create new file storage locations in the Software Factory that point to your blob storage.
To add a new file storage location:
menu Integration & AI > File storage locations
- In the field Storage location, enter a name for the new blob storage location.
- Select the Storage type and configure the connection details for your cloud blob storage.
- Test the connection to verify access works correctly.
Do not remove the old file storage locations yet. Keep both the old and new locations active during the transition period so you retain backup and rollback options if issues arise with the new storage.
2. Update database file paths
File path values stored in your database reference the old storage location. You need to update these paths to point to the new blob storage location. Create an upgrade script that runs during deployment to update all file path columns.
Example upgrade script:
```sql
UPDATE your_table
SET    file_path_column = REPLACE(file_path_column, 'old_storage_location', 'new_storage_location')
WHERE  file_path_column LIKE 'old_storage_location%';
```
Test this script thoroughly in your development environment before running it in production. Verify that all file paths update correctly and that files remain accessible after the change.
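A simple verification query, using the same placeholder names as the upgrade script, can confirm that no rows still reference the old location after the update:

```sql
-- Expect a count of 0 after a successful path migration.
SELECT COUNT(*) AS remaining_old_paths
FROM   your_table
WHERE  file_path_column LIKE 'old_storage_location%';
```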
3. Update task and process flow logic
Review all tasks and process flows that interact with file storage. Update any logic that references the old file storage location to use the new blob storage location instead.
To update a process flow to use the new file storage location:
menu Processes > Process flows
- Search for process flows that use file storage connectors.
- Update connector configurations to reference the new file storage location.
- Test each process flow to confirm that files upload and download correctly from blob storage.
4. Configure permissions
Add the necessary roles and rights to the new file storage location columns. Users need proper permissions to preview, download, and upload files using the new storage.
To configure permissions for the new file storage location:
menu Access control > Roles
- Select the role you want to configure.
- Go to the Tables tab and locate tables with file storage columns.
- Grant the same permissions on new file storage columns that exist on the old columns.
- Verify that the file preview works correctly in the Universal UI.
- Test upload and download functionality with different user roles.
Without proper permissions, preview components will fail to load even if the file exists in blob storage.
5. Remove old file storage
After successful migration and thorough testing, remove the old file storage locations and columns from your model. This cleanup prevents confusion and ensures users cannot accidentally reference deprecated storage.
To remove the old file storage location:
menu Integration & AI > File storage locations
- Verify that all files have been migrated and all paths have been updated.
- Confirm that no tasks or process flows reference the old storage location.
- Remove roles and rights from the old file storage columns in the menu Access control > Roles.
- Delete the old file storage location from your model.
- Drop the old file path columns from your database tables.
Only perform this cleanup after you are certain the migration succeeded and the new storage location works correctly in production.
4. Migrate to the cloud environment
Database migration
If you successfully created a BACPAC file in the database preparation step, your database is ready for cloud migration. You need to create BACPAC files for all databases in your environment, not just your application databases.
Create BACPAC files for the following databases:
- Your application database(s)
- IAM databases for each environment (Development, Test, Acceptance, Production)
Restore these databases in your cloud environment to set up your DTAP environment in the cloud. If you are changing login methods, setting up web domains, or adjusting other IAM settings, this is the moment to apply those changes to all of your environments. This is especially important if you are switching from a local Active Directory to an SSO-based solution; otherwise, your users will not be able to log in to the new cloud environment.
Model migration
You have two options for migrating your Software Factory model to the cloud:
- Migrate the entire Software Factory database.
- Export and import your model into a new Software Factory instance.
Full Software Factory database migration - Create a BACPAC file of your Software Factory database and import it into your cloud environment. This approach preserves all settings, configurations, and model history. We recommend this option when setting up your own cloud environment.
Model export and import - Export your model from the current Software Factory and import it into a new Software Factory instance in the cloud. Both Software Factory instances must run on the same platform version. If your model uses base models, import those first. Attempting to import a model without its base models will result in an error. Base model names must remain exactly the same across environments.
5. Perform integration and acceptance testing
Before going live, verify that all functionality works correctly in your cloud environment. Testing catches configuration issues and ensures your application performs as expected.
Business functionality testing
Key users should test all business-critical functionality in the cloud environment. They know these workflows well and can quickly identify any issues.
Integration testing
Run your automated integration tests on the cloud environment. If you use Testwise, execute your full test suite against the cloud deployment.
External integrations
Test all integrations with external services and applications.
Verify the following:
- API connections function correctly over the internet.
- Authentication between systems works as expected.
- Data flows bidirectionally without errors.
- Response times meet your performance requirements.
Performance validation
Monitor application performance during testing. Cloud environments have different performance characteristics than on-premise infrastructure. Watch for slow queries, network latency issues, or timeout errors that did not appear in your local environment.
6. The final deployment
The final deployment moves your production application from the old environment to the cloud. Proper planning and clear communication minimize downtime and user disruption.
Communicate with users
Inform users about the deployment well in advance. Explain what will change and when the application will be unavailable. If you are implementing Multi-Factor Authentication, provide setup instructions before go-live. Users might need instructions on how to register their MFA devices and an explanation of the new login process.
Schedule the deployment
Choose a deployment window with minimal business impact. Evenings, weekends, or planned maintenance windows work best. Communicate the exact start and end times to all users.
Deployment steps
Follow these steps during your deployment window:
- Disable the application - Turn off the application in your current environment to prevent users from making changes during migration.
- Final data synchronization - Run the final database backup and file storage replication. This captures all changes made since your last sync.
- Apply infrastructure changes - Verify that all configuration changes made in your acceptance environment are also applied to production. This includes Indicium settings, connection strings, file storage paths, and security configurations.
- Migrate data - Import your latest databases into the cloud production environment.
- Update DNS records - Point your application's domain name to the new cloud environment. DNS changes can take time to propagate, so plan accordingly.
- Activate the application - Start Indicium and the Universal UI in the cloud environment. Then enable your application in the IAM.
- Validate core functionality - Test critical workflows to confirm the application works as expected. Have key users perform quick validation checks.
- Enable user access - Once validation passes, inform users that the application is available.
Monitor post-deployment
Watch the application closely during the first hours after go-live. Check for errors in Application Insights (Azure) and review the Slow query log.
7. Explore cloud capabilities
Your application now runs in the cloud. From here, you can start taking advantage of cloud-native capabilities, for example, to improve efficiency or reduce costs.
Automate deployments
Set up CI/CD pipelines to automate your deployment process. Azure DevOps, GitHub Actions, or similar tools can automatically deploy model changes from your Software Factory to test and production environments. Automation reduces deployment time, eliminates manual errors, and makes releases predictable. See deployment automation for more information.
Scale based on usage
Cloud resources can scale up and down based on demand. Configure your database and application services to use less expensive tiers during off-hours. If your users work primarily during business hours, scale down compute resources at night and on weekends. Azure Automation or similar services can schedule these changes automatically, reducing costs without impacting user experience.