What parameters are used to check the significance of the model and the goodness of fit?

To check whether the overall model fit is significant, the primary parameter to look at is the F-statistic. While the t-test (and the associated p-value for each beta) tests whether each coefficient is individually significant, the F-statistic determines whether the model as a whole, with all the coefficients taken together, is significant.

The basic idea behind the F-test is that it is a relative comparison between the model you have built and the intercept-only model, i.e. the model with no coefficients except β0. If the value of the F-statistic is high, the corresponding Prob(F) will be low, and you can conclude that the model is significant. On the other hand, if the F-statistic is low, Prob(F) may exceed the significance level (usually taken as 0.05), in which case you would conclude that the overall model fit is insignificant and the intercept-only model provides just as good a fit.
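This comparison can be sketched directly: the intercept-only model's residual sum of squares is the total sum of squares (TSS), so the F-statistic measures how much the full model reduces it. A minimal illustration on synthetic data (the data, seed, and one-predictor setup are assumptions for the sketch):

```python
import numpy as np
from scipy import stats

# Synthetic data: a linear trend plus noise (assumed for illustration)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# Fit y = b0 + b1*x by ordinary least squares
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

n, p = x.size, 1                       # n observations, p = 1 predictor
rss = np.sum((y - y_hat) ** 2)         # residual SS of the full model
tss = np.sum((y - y.mean()) ** 2)      # residual SS of the intercept-only model

# F compares the reduction in SS against the full model's residual variance
f_stat = ((tss - rss) / p) / (rss / (n - p - 1))
prob_f = stats.f.sf(f_stat, p, n - p - 1)  # Prob(F), the p-value

print(f"F = {f_stat:.1f}, Prob(F) = {prob_f:.3g}")
```

With a clear linear trend in the data, the F-statistic comes out large and Prob(F) falls well below 0.05, so the model would be judged significant.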

Apart from that, to test the goodness, or extent, of the fit, we look at a parameter called R-squared (for simple linear regression models) or adjusted R-squared (for multiple linear regression models, since it penalises the addition of predictors that do not improve the fit). If the overall model fit is deemed significant by the F-test, you can go ahead and look at the value of R-squared. This value lies between 0 and 1, with 1 meaning a perfect fit. A higher value of R-squared indicates a better model, with more of the variance in the data being explained by the fitted line. For example, an R-squared value of 0.75 means that 75% of the variance in the data is explained by the model. But it is important to remember that R-squared only tells you the extent of the fit and should not be used to determine whether the model fit is significant.
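Both quantities follow directly from the same sums of squares: R² = 1 − RSS/TSS, and adjusted R² rescales this by the degrees of freedom. A short sketch, again on assumed synthetic data with a single predictor:

```python
import numpy as np

# Synthetic data: a linear trend plus noise (assumed for illustration)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

rss = np.sum(resid ** 2)               # residual sum of squares
tss = np.sum((y - y.mean()) ** 2)      # total sum of squares

n, p = x.size, 1
r2 = 1 - rss / tss                     # fraction of variance explained
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)  # penalises extra predictors

print(f"R-squared = {r2:.3f}, adjusted R-squared = {adj_r2:.3f}")
```

Note that adjusted R² is always at most R², and the gap widens as more predictors are added relative to the number of observations, which is why it is preferred for multiple linear regression.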