Free online tool for machine learning regression tasks. Upload your data, train models, and visualize results with ease.
Upload a CSV or Excel file containing your features (X) and target variable (y).
Training model and generating results...
Upload CSV or Excel files with your dataset. Easily select target and feature columns with our intuitive interface.
Adjust all SVR parameters including kernel type, C value, epsilon, and gamma with interactive sliders.
Interactive charts showing actual vs predicted values, residuals, and feature importance.
Handle missing data, normalize features, and split datasets with just a few clicks.
Evaluate model performance with k-fold cross-validation to ensure reliable results.
Download predictions, save trained models, or make new predictions with the trained model.
Organize your data in a spreadsheet with features in columns and samples in rows. Save as CSV or Excel format.
Click "Browse" to select your file. The system will automatically detect columns.
Choose which column contains your target variable (y) and which columns to use as features (X).
Adjust parameters like kernel type, C value, and epsilon based on your needs or use defaults.
Click "Train Model" and view performance metrics and visualizations when complete.
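The four steps above can be sketched in code. This is an illustration using scikit-learn, not the tool's actual backend; the column names are hypothetical, and a small synthetic table stands in for your uploaded file so the example runs on its own.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score

# Step 1-2: load the spreadsheet and choose target/feature columns.
# In practice: df = pd.read_csv("your_data.csv")
rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
df["y"] = 3 * df["x1"] - 2 * df["x2"] + rng.normal(scale=0.1, size=200)

X, y = df[["x1", "x2"]], df["y"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 3: scale features and set hyperparameters (scikit-learn defaults shown).
scaler = StandardScaler().fit(X_train)
model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(scaler.transform(X_train), y_train)

# Step 4: evaluate on held-out data.
print(r2_score(y_test, model.predict(scaler.transform(X_test))))
```

Note that the scaler is fit on the training split only and then applied to the test split, which avoids leaking test-set statistics into training.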
The kernel function implicitly maps your data into a higher-dimensional space where a linear fit can capture nonlinear relationships in the original features.
C controls the trade-off between keeping the regression function smooth and fitting the training points closely.
Try values between 0.1 and 100, adjusting based on validation performance.
Epsilon defines the width of the tube around the prediction within which errors incur no penalty.
Typical values range from 0.01 to 0.5, depending on the noise level in your data.
Gamma defines how much influence a single training example has (RBF/poly kernels).
For RBF kernel, try 'scale' or 'auto' first before custom values.
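Taken together, the sliders correspond to a handful of model parameters. A sketch of how they map onto scikit-learn's `SVR` (the values shown are scikit-learn's defaults, with 'scale' as the recommended starting gamma):

```python
from sklearn.svm import SVR

model = SVR(
    kernel="rbf",   # kernel type: 'linear', 'poly', 'rbf', or 'sigmoid'
    C=1.0,          # regularization strength; try 0.1-100
    epsilon=0.1,    # width of the no-penalty tube; try 0.01-0.5
    gamma="scale",  # per-example influence; try 'scale' or 'auto' first
)
```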
The Radial Basis Function (RBF) kernel is a good default choice as it works well for most nonlinear problems. Use the linear kernel if you suspect your data has a linear relationship or if you're working with very large datasets (it's faster to compute). The polynomial kernel can capture polynomial relationships (adjust the degree parameter), while the sigmoid kernel is similar to neural network activation functions.
You can compare performance using different kernels by training multiple models and comparing their evaluation metrics.
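One way to run that comparison outside the tool, sketched with scikit-learn on synthetic data: train one pipeline per kernel and compare cross-validated R² scores.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic data with a linear relationship, for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = X[:, 0] - X[:, 1] + rng.normal(scale=0.2, size=300)

scores = {}
for kernel in ["linear", "rbf", "poly"]:
    pipe = make_pipeline(StandardScaler(), SVR(kernel=kernel))
    scores[kernel] = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()

print(scores)  # a higher mean R^2 suggests a better-suited kernel
```

Because the synthetic target here is linear, the linear kernel should score near the top; on genuinely nonlinear data the RBF kernel would typically pull ahead.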
The C parameter is a regularization term: it controls the trade-off between keeping the regression function smooth and fitting the training points closely. Larger values of C fit the training data more tightly; smaller values yield a smoother, more regularized model.
Start with C=1 and adjust up or down based on validation performance. For noisy data, smaller C values often work better.
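That adjust-and-validate loop can be automated. A sketch using scikit-learn's `GridSearchCV` over a few candidate C values (the data and grid here are illustrative):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 2 * X[:, 0] + rng.normal(scale=0.3, size=200)  # noisy target

pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(pipe, {"svr__C": [0.1, 1, 10, 100]}, cv=5, scoring="r2")
grid.fit(X, y)

print(grid.best_params_, grid.best_score_)
```

The winning C is whichever value maximizes mean cross-validated R²; with noisier targets the search tends to settle on smaller values, matching the advice above.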
Support Vector Machines (including SVR) are sensitive to the scale of features because their optimization relies on distance measures, so features on larger scales can dominate the model. Our tool provides several scaling options.
We recommend always scaling your data unless you have a specific reason not to.
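A small demonstration of why, sketched with scikit-learn: the same SVR model scored with and without standardization on two features of very different magnitudes (the data is synthetic).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Two equally informative features, one on a ~1000x larger scale.
X = np.c_[rng.normal(size=300), rng.normal(size=300) * 1000]
y = X[:, 0] + X[:, 1] / 1000 + rng.normal(scale=0.1, size=300)

raw = cross_val_score(SVR(), X, y, cv=5).mean()
scaled = cross_val_score(make_pipeline(StandardScaler(), SVR()), X, y, cv=5).mean()

print(raw, scaled)  # the scaled pipeline scores noticeably higher
```

Without scaling, the large-magnitude feature dominates the kernel's distance computation and the model effectively ignores the other one; standardizing restores both features to equal footing.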
Our tool provides several metrics for evaluating your SVR model.
Compare these metrics between training and test sets to check for overfitting. The visualizations (actual vs predicted, residuals) also help assess model quality.
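A sketch of that train-versus-test comparison with common regression metrics (R², MAE, RMSE), using scikit-learn and synthetic data; the tool's own metric set may differ.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.2, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), SVR()).fit(X_tr, y_tr)

for name, Xs, ys in [("train", X_tr, y_tr), ("test", X_te, y_te)]:
    pred = model.predict(Xs)
    rmse = mean_squared_error(ys, pred) ** 0.5
    print(name, r2_score(ys, pred), mean_absolute_error(ys, pred), rmse)
```

A much better training score than test score is the classic overfitting signature; roughly matched scores suggest the model generalizes.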
Our web application handles moderately sized datasets (up to about 50,000 samples with 20-30 features) efficiently. For very large datasets, or if you encounter performance issues, try sampling your data or using fewer features. The training time is displayed so you can monitor progress.
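One way to sample a large dataset before uploading, sketched with pandas (the 50,000-row threshold mirrors the guideline above; the DataFrame here stands in for your real file):

```python
import numpy as np
import pandas as pd

# Stand-in for a large file; in practice: df = pd.read_csv("big_data.csv")
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(120_000, 5)))

if len(df) > 50_000:
    # Uniform random subsample; fixing random_state makes it reproducible.
    df = df.sample(n=50_000, random_state=0)

print(len(df))
```

A uniform random sample preserves the overall feature and target distributions, so the model trained on it should behave similarly to one trained on the full file.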