title,body,link,tags,view_count,is_answered,score,answer_count Tools for non regression testing on REST API?,"

I have developed a project with a Symfony server as the backend and a mobile application as the frontend.

I am unemployed, but I hope to start my own business :)

What tool(s) should I use for non-regression testing?

Thank you for your help!

",https://stackoverflow.com/questions/73990741/tools-for-non-regression-testing-on-rest-api,"['symfony', 'testing', 'automated-tests', 'integration-testing']",58,False,0,1 Error Message when running train() in R for regression testing,"

I am trying to use the following dataset that I downloaded: https://www.kaggle.com/datasets/zaheenhamidani/ultimate-spotify-tracks-db?resource=download

and when I try to run decision tree regression testing I get the following error message:

Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold01.Rep1: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
  need at least two non-NA values to interpolate

    [The identical pair of warnings repeats for every fold, Fold01 through Fold10, in each of Rep1 through Rep5 (50 times in total), and is elided here.]
    Warning: There were missing values in resampled performance measures.Something is wrong; all the RMSE metric values are missing:
          RMSE        Rsquared        MAE     
     Min.   : NA   Min.   : NA   Min.   : NA  
     1st Qu.: NA   1st Qu.: NA   1st Qu.: NA  
     Median : NA   Median : NA   Median : NA  
     Mean   :NaN   Mean   :NaN   Mean   :NaN  
     3rd Qu.: NA   3rd Qu.: NA   3rd Qu.: NA  
     Max.   : NA   Max.   : NA   Max.   : NA  
     NA's   :3     NA's   :3     NA's   :3   

I have looked at several questions already on Stack Overflow and I cannot figure out what is wrong. There are no missing values in the dataset, and when I condense the data to just the Rap genre, there are no duplicate values. Here is my code:

```{r}
library(tidyverse)
library(caret)
library(ROCR)
library(MLmetrics)
library(mltools)
library(rpart.plot)
```

```{r}
#1 Load the data and ensure the labels are correct. (The "predict age" goal and the
# Adult-dataset link, http://archive.ics.uci.edu/ml/datasets/Adult, are leftovers from
# a class template; this script predicts track popularity.)

setwd("myfilepath")

#names <- c("genre", "artist_name", "track_name", "track_id", "popularity", "acousticness", "danceability", "duration_ms", "energy", "instrumentalness", "key", "liveness", "loudness", "mode", "speechiness", "tempo", "time_signature", "valence")

df <- read_csv("myfilepath")

View(df)


table(df$genre)

rap <- df[df$genre == 'Rap',]
View(rap)

```

```{r}
#2 Ensure all the variables are classified correctly including the target variable and collapse factors if still needed. 
rap2 <- rap[, -c(1:4)]
View(rap2)
str(rap2)

rap2[,c("key", "mode", "time_signature")] <- lapply(rap2[,c("key", "mode", "time_signature")] , as.factor)
table(rap2$time_signature)

str(rap2)

summary(rap2)

sum(is.na(rap2)) 

histogram(rap2$popularity,type='count', nint=30)

```

```{r}
#4 Split your data into test, tune, and train. (80/10/10)

set.seed(1)
part_index_1 <- caret::createDataPartition(rap2$popularity,  # split the data with a .8 probability so that 80% of the data is chosen
                                           times=1,
                                           p = 0.80,
                                           groups=1,
                                           list=FALSE)

train <- rap2[part_index_1, ]  # subset the 80% chosen in the first partition into the train set
tune_and_test <- rap2[-part_index_1, ]  # subset the remaining in a tune and test set

set.seed(1)
View(tune_and_test)

tune_and_test_index <- caret::createDataPartition(tune_and_test$popularity,  # now split the tune and test set 50-50
                                           p = .5,
                                           list = FALSE,
                                           times = 1)

tune <- tune_and_test[tune_and_test_index, ]  # subset the 50% chosen into the tune set
test <- tune_and_test[-tune_and_test_index, ]  # subset the remaining 50% into the test set

dims <- data.frame("Train Size" = nrow(train), "Tune Size" = nrow(tune), "Test Size" = nrow(test))  # create a data frame of the sizes of each set and output the dataframe
dims

```

```{r}
#5 Build your model using the training data, rpart2, and repeated cross validation as reviewed in class with the caret package.

View(train)
features <- train[,-1] #dropping 1 because it's target variable. 
View(features)
target <- train$popularity

target

str(features)

str(target)
#Three steps in building a caret ML model
#Step 1: Cross validation process-the process by which the training data will be used to build the initial model must be set. As seen below:

fitControl <- trainControl(method = "repeatedcv",
                          number = 10,
                          repeats = 5) 
View(fitControl)
# number - number of folds
# repeats - number of times the CV is repeated, takes the average of these repeat rounds
#review the documentation on https://topepo.github.io/caret/measuring-performance.htm

#Step 2: Usually involves setting a hyper-parameter search. This is optional and the hyper-parameters vary by model. Let's take a look at the documentation for the model we are going to use. Same search function as for classification 

tree.grid <- expand.grid(maxdepth=c(3:20))

#  a tree of depth k has at most 2^k terminal nodes (and 2^(k+1)-1 nodes in total)
#let's look at the documentation in two places 
# for the tune grid function: https://topepo.github.io/caret/model-training-and-tuning.html

#options for the rpart2: https://topepo.github.io/caret/train-models-by-tag.html#tree-based-model

#Step 3: Train the models
set.seed(1984)
rap_mdl_r <- train(x=features,
                y=target,
                method="rpart2",
                trControl=fitControl,
                metric="RMSE")
rap_mdl_r
```
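One thing stands out here (an observation, not a guaranteed fix for the error above): tree.grid is defined but never passed to train(), so caret tunes maxdepth over its own small default grid. A minimal sketch of wiring it in, assuming that was the intent:

```r
# sketch: the same call as above, plus the tuneGrid defined earlier
rap_mdl_r <- train(x = features,
                   y = target,
                   method = "rpart2",
                   trControl = fitControl,
                   tuneGrid = tree.grid,  # maxdepth = 3:20 from expand.grid() above
                   metric = "RMSE")
```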
",https://stackoverflow.com/questions/74661895/error-message-when-running-train-in-r-for-regression-testing,"['r', 'machine-learning', 'r-caret']",46,False,0,0 Jetpack Compose Ui testing - TextField - Random Key events - Regression testing,"

I uploaded my app to the Google Play Console for review and got some crashes in the pre-launch report. After some research, I found that it is a bug inside the Jetpack Compose TextField, triggered by Google's monkey-test regressions. The official Android developer docs recommend UI test automation. I want to know how to automate a regression test on a TextField that performs random key events. I believe the crash occurs when pressing Del, Backspace, or Ctrl+Backspace in a TextField. Here is the stack trace:

Fatal Exception: java.lang.IllegalArgumentException: end cannot negative. [end: -1]
at androidx.compose.ui.text.TextRangeKt.packWithCheck(TextRangeKt.java:1)
at androidx.compose.ui.text.input.EditingBuffer.delete$ui_text_release(EditingBuffer.java)
at androidx.compose.ui.text.input.DeleteSurroundingTextCommand.applyTo(DeleteSurroundingTextCommand.java:2)
at androidx.compose.ui.text.input.EditProcessor.apply(EditProcessor.java:2)
at androidx.compose.foundation.text.TextFieldKeyInput.apply(TextFieldKeyInput.java:2)
at androidx.compose.foundation.text.TextFieldKeyInput.access$apply(TextFieldKeyInput.java:59)
at androidx.compose.foundation.text.TextFieldKeyInput$process$2.invoke(TextFieldKeyInput.java:59)
at androidx.compose.foundation.text.TextFieldKeyInput$process$2.invoke(TextFieldKeyInput.java:59)
at androidx.compose.foundation.text.TextFieldKeyInput.commandExecutionContext(TextFieldKeyInput.java:14)
at androidx.compose.foundation.text.TextFieldKeyInputKt$textFieldKeyInput$2$1.invoke-ZmokQxo(TextFieldKeyInputKt.java:14)
at androidx.compose.foundation.text.TextFieldKeyInputKt$textFieldKeyInput$2$1.invoke(TextFieldKeyInputKt.java:14)
at androidx.compose.ui.node.ModifiedKeyInputNode.propagateKeyEvent-ZmokQxo(ModifiedKeyInputNode.java:5)
at androidx.compose.ui.input.key.KeyInputModifier.processKeyInput-ZmokQxo(KeyInputModifier.java:9)
at androidx.compose.ui.platform.AndroidComposeView.sendKeyEvent-ZmokQxo(AndroidComposeView.java:9)
at androidx.compose.ui.platform.AndroidComposeView.dispatchKeyEvent(AndroidComposeView.java:9)
at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1840)
at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1840)
at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1840)
at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1840)
at com.android.internal.policy.DecorView.superDispatchKeyEvent(DecorView.java:444)
at com.android.internal.policy.PhoneWindow.superDispatchKeyEvent(PhoneWindow.java:1819)
at androidx.core.view.KeyEventDispatcher.activitySuperDispatchKeyEventPre28(KeyEventDispatcher.java:3)
at androidx.core.app.ComponentActivity.dispatchKeyEvent(ComponentActivity.java:18)
at com.android.internal.policy.DecorView.dispatchKeyEvent(DecorView.java:358)
at android.view.ViewRootImpl$ViewPostImeInputStage.processKeyEvent(ViewRootImpl.java:4979)
at android.view.ViewRootImpl$ViewPostImeInputStage.onProcess(ViewRootImpl.java:4851)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:4385)
at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:4438)
at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:4404)
at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:4531)
at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:4412)
at android.view.ViewRootImpl$AsyncInputStage.apply(ViewRootImpl.java:4588)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:4385)
at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:4438)
at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:4404)
at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:4412)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:4385)
at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:4438)
at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:4404)
at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:4564)
at android.view.ViewRootImpl$ImeInputStage.onFinishedInputEvent(ViewRootImpl.java:4733)
at android.view.inputmethod.InputMethodManager$PendingEvent.run(InputMethodManager.java:2430)
at android.view.inputmethod.InputMethodManager.invokeFinishedInputEventCallback(InputMethodManager.java:1993)
at android.view.inputmethod.InputMethodManager.finishedInputEvent(InputMethodManager.java:1984)
at android.view.inputmethod.InputMethodManager$ImeInputEventSender.onInputEventFinished(InputMethodManager.java:2407)
at android.view.InputEventSender.dispatchInputEventFinished(InputEventSender.java:141)
at android.os.MessageQueue.nativePollOnce(MessageQueue.java)
at android.os.MessageQueue.next(MessageQueue.java:331)
at android.os.Looper.loop(Looper.java:149)
at android.app.ActivityThread.main(ActivityThread.java:6662)
at java.lang.reflect.Method.invoke(Method.java)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:547)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:873)

How can I write a UI test for this and fix the issue?
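For reference, a minimal sketch of what such a test could look like with the Compose testing APIs (androidx.compose.ui.test); the composable under test, the test tag, and the repeat count are assumptions about the app, not part of the question:

```kotlin
import android.view.KeyEvent as NativeKeyEvent
import androidx.compose.ui.input.key.KeyEvent
import androidx.compose.ui.test.junit4.createComposeRule
import androidx.compose.ui.test.onNodeWithTag
import androidx.compose.ui.test.performKeyPress
import androidx.compose.ui.test.performTextInput
import org.junit.Rule
import org.junit.Test

class TextFieldKeyEventTest {
    @get:Rule
    val rule = createComposeRule()

    @Test
    fun deleteKeysDoNotCrash() {
        rule.setContent { MyTextFieldScreen() } // hypothetical composable under test

        val field = rule.onNodeWithTag("input") // assumes Modifier.testTag("input")
        field.performTextInput("abc")
        repeat(10) { // replay the suspect key from the crash report
            field.performKeyPress(
                KeyEvent(NativeKeyEvent(NativeKeyEvent.ACTION_DOWN, NativeKeyEvent.KEYCODE_DEL))
            )
        }
    }
}
```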

",https://stackoverflow.com/questions/73523333/jetpack-compose-ui-testing-textfield-random-key-events-regression-testing,"['android-jetpack-compose', 'jetpack', 'android-jetpack-compose-text']",192,False,0,0 Automate UI Tests in a Kotlin Compose Project - equivalent version of Expresso Recorder,"

Is there a framework I can use to automate UI regression testing in a Kotlin Compose project? Clicking Run -> Record Espresso Test in Android Studio gives a warning that the Espresso Testing Framework does not support Compose projects.
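There is no recorder for Compose, but Compose ships its own test framework (androidx.compose.ui.test), in which tests are written by hand against the semantics tree. A minimal sketch (the activity name and UI strings are placeholders):

```kotlin
import androidx.compose.ui.test.assertIsDisplayed
import androidx.compose.ui.test.junit4.createAndroidComposeRule
import androidx.compose.ui.test.onNodeWithText
import androidx.compose.ui.test.performClick
import org.junit.Rule
import org.junit.Test

class GreetingRegressionTest {
    @get:Rule
    val rule = createAndroidComposeRule<MainActivity>() // your activity here

    @Test
    fun continueButtonShowsWelcome() {
        rule.onNodeWithText("Continue").performClick()
        rule.onNodeWithText("Welcome").assertIsDisplayed()
    }
}
```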

",https://stackoverflow.com/questions/71392337/automate-ui-tests-in-a-kotlin-compose-project-equivalent-version-of-expresso-r,"['android', 'kotlin', 'android-jetpack-compose', 'ui-automation', 'expresso']",394,True,9,1 Git branching strategy - losing changes on merge,"

We've got a branching strategy with develop, release, and master branches. Features/bugs branch off of release. When complete, they merge into develop for QA testing. Once QA passes, they merge (directly from the feature/bug branch) into release for release regression testing. Release merges to master after final regression testing. Hotfixes branch off master, merge back into master.

With all that said, we are sometimes losing changes in release and QA when merging branches in. Developers are merging from release into their branches prior to merging into QA and release, but never merging in from QA.

Are there any conceptual issues with this strategy? Any potential reasons why we may be losing those changes?
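As a diagnostic aid (not an answer to the conceptual question), a few read-only commands can narrow down where a change went missing; the SHA and path below are placeholders:

```sh
# which commits are on release but missing from develop, and vice versa?
git log --oneline release ^develop
git log --oneline develop ^release

# which branches already contain a given feature commit?
git branch -a --contains <feature-sha>

# which merge commits last touched the file that lost the change?
git log --merges --oneline -- path/to/file
```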

",https://stackoverflow.com/questions/70999699/git-branching-strategy-losing-changes-on-merge,"['git', 'merge', 'branching-and-merging']",459,True,1,2 Tools for non regression testing on REST API?,"

I have developed a project with a symfony server as backend and a mobile application as frontend.

I am unemployed but I hope to start my own business :)

What tool(s) should I use for non regression testing ?

Thank you for your help!

",https://stackoverflow.com/questions/73990741/tools-for-non-regression-testing-on-rest-api,"['symfony', 'testing', 'automated-tests', 'integration-testing']",58,False,0,1 Azure release pipeline Regression test job failure due to error :- This version of ChromeDriver only supports Chrome version 86,"

In Azure DevOps, I am getting an error while running the regression test job in a release pipeline. The artifact is hosted in Azure Repos. The full error stack is below. Please help with this scenario.

2022-11-17T07:32:37.8052533Z Tests in error: 
2022-11-17T07:32:37.8053113Z   getData(Academy.BrowserTest): session not created: This version of ChromeDriver only supports Chrome version 86(..)
2022-11-17T07:32:37.8054011Z 
2022-11-17T07:32:37.8054528Z Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
2022-11-17T07:32:37.8054792Z 
2022-11-17T07:32:37.8123963Z [INFO] ------------------------------------------------------------------------
2022-11-17T07:32:37.8124674Z [INFO] BUILD FAILURE
2022-11-17T07:32:37.8125946Z [INFO] ------------------------------------------------------------------------
2022-11-17T07:32:37.8148852Z [INFO] Total time:  01:11 min
2022-11-17T07:32:37.8150695Z [INFO] Finished at: 2022-11-17T07:32:37Z
2022-11-17T07:32:37.8151259Z [INFO] ------------------------------------------------------------------------
2022-11-17T07:32:37.8166640Z [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.4:test (default-test) on project devops: There are test failures.
2022-11-17T07:32:37.8169627Z [ERROR] 
2022-11-17T07:32:37.8170486Z [ERROR] Please refer to D:\a\r1\a\_GBSAutomationTestRepo.git\target\surefire-reports for the individual test results.
2022-11-17T07:32:37.8171162Z [ERROR] -> [Help 1]
2022-11-17T07:32:37.8171546Z [ERROR] 
2022-11-17T07:32:37.8172050Z [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
2022-11-17T07:32:37.8172644Z [ERROR] Re-run Maven using the -X switch to enable full debug logging.
2022-11-17T07:32:37.8172947Z [ERROR] 
2022-11-17T07:32:37.8173332Z [ERROR] For more information about the errors and possible solutions, please read the following articles:
2022-11-17T07:32:37.8174196Z [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
2022-11-17T07:32:37.8760477Z The process 'C:\ProgramData\chocolatey\lib\maven\apache-maven-3.8.6\bin\mvn.cmd' failed with exit code 1
2022-11-17T07:32:37.8761396Z Could not retrieve code analysis results - Maven run failed.
2022-11-17T07:32:39.7735190Z Result Attachments will be stored in LogStore
2022-11-17T07:32:39.8293757Z Run Attachments will be stored in LogStore
2022-11-17T07:32:39.9011567Z ##[error]Build failed.
2022-11-17T07:32:39.9091599Z ##[section]Async Command Start: Publish test results
2022-11-17T07:32:40.2939741Z Publishing test results to test run '18'.
2022-11-17T07:32:40.2969758Z TestResults To Publish 2, Test run id:18
2022-11-17T07:32:40.3013991Z Test results publishing 2, remaining: 0. Test run id: 18
2022-11-17T07:32:41.6218305Z Published Test Run : https://dev.azure.com/gbsr2rindiarepos/AZGBSDemoProject01/_TestManagement/Runs?runId=18&_a=runCharts
2022-11-17T07:32:42.0349797Z Flaky failed test results are opted out of pass percentage
2022-11-17T07:32:42.0701567Z ##[section]Async Command End: Publish test results
2022-11-17T07:32:42.0703521Z ##[section]Finishing: Maven D:\a\r1\a/_GBSAutomationTestRepo.git/pom.xml

Perform regression testing via an implemented release pipeline job in the "dev.azure.com" project.
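The usual cause of this error is a chromedriver binary pinned in the repo while Chrome on the build agent has moved on. A hedged sketch of one common fix, resolving a matching driver at runtime with the WebDriverManager library (this assumes you can add the io.github.bonigarcia:webdrivermanager test dependency):

```java
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class DriverFactory {
    public static WebDriver create() {
        // downloads and configures a chromedriver matching the installed Chrome,
        // instead of a checked-in binary that only supports Chrome 86
        WebDriverManager.chromedriver().setup();
        return new ChromeDriver();
    }
}
```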

",https://stackoverflow.com/questions/74471769/azure-release-pipeline-regression-test-job-failure-due-to-error-this-version,"['azure', 'azure-devops', 'azure-pipelines']",108,False,0,1 How to run external python file and show data in front end of django in the form of table,"

Hi guys, I want to run an external .py script whose output is JSON, and show the data as rows in a table on the front end. The data is dynamic regression-testing data. How do I show that using Django?
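A minimal sketch of one way to do this, where the script name, the JSON shape, and the template name are all assumptions: run the script in a view, parse its stdout as JSON, and hand the rows to a template.

```python
import json
import subprocess

from django.shortcuts import render

def regression_results(request):
    # run the external script and capture its JSON output from stdout
    completed = subprocess.run(
        ["python", "regression_script.py"],  # hypothetical script path
        capture_output=True, text=True, check=True,
    )
    rows = json.loads(completed.stdout)  # assumed: a list of dicts, one per row
    return render(request, "results.html", {"rows": rows})
```

The template would then render the table with a plain {% for row in rows %} loop.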

",https://stackoverflow.com/questions/74871268/how-to-run-external-python-file-and-show-data-in-front-end-of-django-in-the-form,"['python', 'django']",38,True,-1,1 Scripted jenkins pipeline slackSend is not displaying vertical color line,"

In a Jenkins scripted (Groovy) pipeline, I have the code below:

slackSend channel: '#Regression-Testing-Result', 
color: (currentBuild.result.equals("SUCCESS")) ? "good" : "danger",
message: (currentBuild.result.equals("SUCCESS")) ? "Tests passed" : "Tests failed"

Issue: the "Tests passed" or "Tests failed" message is printed in the Slack channel (Regression-Testing-Result), but the colored vertical line is not (a vertical green line for success, a vertical red line for failure).

Jenkins version : 2.319.3

Slack Upload Plugin version used : 1.7
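One hedged guess worth checking: currentBuild.result can still be null while the build is running, so the .equals("SUCCESS") check may not behave as expected mid-pipeline, whereas currentBuild.currentResult always has a value. (Note also that slackSend is provided by the Slack Notification plugin, not the Slack Upload plugin.) A sketch:

```groovy
// currentResult is never null during the run; result often is
def ok = currentBuild.currentResult == 'SUCCESS'
slackSend channel: '#Regression-Testing-Result',
          color: ok ? 'good' : 'danger',
          message: ok ? 'Tests passed' : 'Tests failed'
```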

",https://stackoverflow.com/questions/74285149/scripted-jenkins-pipeline-slacksend-is-not-displaying-vertical-color-line,"['jenkins', 'groovy', 'slack']",203,False,0,1 Check the database server name in a Postgres stored Procedure,"

Is there any way to check the database server name in a Postgres stored procedure?

I have a stored procedure that clears all dynamic data, to be used before regression testing an app that loads messages into the database.

I want to make sure it is only ever used in the DEV environment, and never accidentally copied across to the production server and run against the LIVE database.
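A minimal sketch of such a guard, assuming names for the procedure and the DEV database; Postgres provides current_database() for this kind of check (and inet_server_addr() for the server address, though that is null on Unix-socket connections):

```sql
CREATE OR REPLACE PROCEDURE clear_dynamic_data()
LANGUAGE plpgsql
AS $$
BEGIN
    -- refuse to run anywhere but the DEV database
    IF current_database() <> 'dev_db' THEN
        RAISE EXCEPTION 'clear_dynamic_data() may only run on the DEV database (got %)',
            current_database();
    END IF;
    -- ... DELETE statements for the dynamic data go here ...
END;
$$;
```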

",https://stackoverflow.com/questions/70678363/check-the-database-server-name-in-a-postgres-stored-procedure,['postgresql'],213,False,0,0 How to set up Visual Regression of react-chartjs-2 component,"

I am trying to set up visual regression testing for react-chartjs-2 components with React Testing library. However, all of the snapshots that are being generated are blank, but the component renders properly in the browser.

This is what I'm currently testing. I basically combined this blog post example with the pie chart example from react-chartjs-2.

import React from 'react';
import {generateImage, debug} from 'jsdom-screenshot';
import {render} from '@testing-library/react';
import {Pie} from "react-chartjs-2";

it('has no visual regressions', async () => {
    window.ResizeObserver =
        window.ResizeObserver ||
        jest.fn().mockImplementation(() => ({
            disconnect: jest.fn(),
            observe: jest.fn(),
            unobserve: jest.fn(),
        }));

    const data = {
        labels: ['Red', 'Blue', 'Yellow', 'Green', 'Purple', 'Orange'],
        datasets: [
            {
                label: '# of Votes',
                data: [12, 19, 3, 5, 2, 3],
                backgroundColor: [
                    'rgba(255, 99, 132, 0.2)',
                    'rgba(54, 162, 235, 0.2)',
                    'rgba(255, 206, 86, 0.2)',
                    'rgba(75, 192, 192, 0.2)',
                    'rgba(153, 102, 255, 0.2)',
                    'rgba(255, 159, 64, 0.2)',
                ],
                borderColor: [
                    'rgba(255, 99, 132, 1)',
                    'rgba(54, 162, 235, 1)',
                    'rgba(255, 206, 86, 1)',
                    'rgba(75, 192, 192, 1)',
                    'rgba(153, 102, 255, 1)',
                    'rgba(255, 159, 64, 1)',
                ],
                borderWidth: 1,
            },
        ],
    };
    render(<div><Pie data={data}/></div>)
    expect(await generateImage()).toMatchImageSnapshot();
});

I am wondering if it's a timing issue. Running debug() before the expect shows a canvas with 0 width and height:

<canvas
  height="0"
  role="img"
  style="display: block; box-sizing: border-box; height: 0px; width: 0px;"
  width="0"
/>
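A hedged guess: with ResizeObserver mocked out, the chart's responsive sizing never runs under jsdom, so the canvas stays 0x0. One sketch is to opt out of responsive sizing and give the chart explicit dimensions (prop and option names per react-chartjs-2 / Chart.js; the 400px size is arbitrary):

```jsx
render(
    <div style={{width: 400, height: 400}}>
        <Pie
            data={data}
            width={400}
            height={400}
            options={{responsive: false, animation: false}}
        />
    </div>
);
```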
",https://stackoverflow.com/questions/70856931/how-to-set-up-visual-regression-of-react-chartjs-2-component,"['jestjs', 'react-testing-library', 'jsdom', 'react-chartjs']",382,True,2,1 What's the best tools or technique to test non-regression of calculated data in web application,"

I will start working on non-regression testing of a tracking web application. The purpose of the test automation is to validate that the values calculated and generated by the application under test do not regress between two versions of the same application. As this is my first time testing this type of application, I'm not sure Selenium is the right tool for these tests. Has anyone done a test like this before? Could you suggest other tools or testing techniques?
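Selenium only helps if the values must be read off the UI; if they can be exported (CSV, API), a plain golden-file comparison is often simpler. A sketch of that technique (file names and the id column are placeholders):

```python
import csv

def load(path):
    with open(path, newline="") as f:
        return {row["id"]: row for row in csv.DictReader(f)}

old, new = load("v1_values.csv"), load("v2_values.csv")
for key in sorted(old.keys() | new.keys()):
    if old.get(key) != new.get(key):
        print(f"regression candidate {key}: {old.get(key)} -> {new.get(key)}")
```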

",https://stackoverflow.com/questions/70932781/whats-the-best-tools-or-technique-to-test-non-regression-of-calculated-data-in,"['selenium', 'automated-tests']",101,True,0,1 How to script user input using Firebase Game Loop testing on Android?,"

I'm interested in scripting some user actions for regression testing my app as I publish updates. I have read this Firebase doc and this Google page on Game Loop, and I clearly am not getting it.

Am I supposed to script the user actions I want to mimic by writing Java code here?

I thought I would be able to run the app on a device and record user actions as a scripting mechanism. Is that not how this works?
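For what it's worth, Game Loop does not record anything; the app itself opts in by handling a special launch intent and then driving its own scripted scenario in code. A hedged sketch (the intent action string is from the Firebase docs; the helper method is hypothetical):

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

public class GameLoopActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent launch = getIntent();
        if ("com.google.intent.action.TEST_LOOP".equals(launch.getAction())) {
            int scenario = launch.getIntExtra("scenario", 0); // which loop to run
            runScriptedScenario(scenario); // hypothetical: your scripted user actions
        }
    }

    private void runScriptedScenario(int scenario) {
        // drive the app programmatically here, then call finish() when done
    }
}
```

The manifest also needs an intent filter for com.google.intent.action.TEST_LOOP on this activity, per the Firebase Test Lab docs.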

",https://stackoverflow.com/questions/70856380/how-to-script-user-input-using-firebase-game-loop-testing-on-android,"['android', 'firebase', 'firebase-test-lab']",178,True,0,1 Error Message when running train() in R for regression testing,"

I am trying to use the following dataset that I downloaded: https://www.kaggle.com/datasets/zaheenhamidani/ultimate-spotify-tracks-db?resource=download

and when I try to run decision tree regression testing I get the following error message:

Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold01.Rep1: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
  need at least two non-NA values to interpolate

    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold02.Rep1: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold03.Rep1: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold04.Rep1: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold05.Rep1: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold06.Rep1: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold07.Rep1: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold08.Rep1: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold09.Rep1: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold10.Rep1: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold01.Rep2: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold02.Rep2: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold03.Rep2: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold04.Rep2: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold05.Rep2: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold06.Rep2: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold07.Rep2: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold08.Rep2: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold09.Rep2: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold10.Rep2: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold01.Rep3: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold02.Rep3: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold03.Rep3: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold04.Rep3: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold05.Rep3: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold06.Rep3: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold07.Rep3: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold08.Rep3: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold09.Rep3: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold10.Rep3: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold01.Rep4: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold02.Rep4: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold03.Rep4: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold04.Rep4: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold05.Rep4: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold06.Rep4: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold07.Rep4: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold08.Rep4: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold09.Rep4: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold10.Rep4: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold01.Rep5: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold02.Rep5: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold03.Rep5: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold04.Rep5: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold05.Rep5: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold06.Rep5: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold07.Rep5: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold08.Rep5: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold09.Rep5: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: Setting row names on a tibble is deprecated.Warning: predictions failed for Fold10.Rep5: maxdepth=3 Error in approx(x[, "nsplit"], x[, "CP"], depth) : 
      need at least two non-NA values to interpolate
    Warning: There were missing values in resampled performance measures.Something is wrong; all the RMSE metric values are missing:
          RMSE        Rsquared        MAE     
     Min.   : NA   Min.   : NA   Min.   : NA  
     1st Qu.: NA   1st Qu.: NA   1st Qu.: NA  
     Median : NA   Median : NA   Median : NA  
     Mean   :NaN   Mean   :NaN   Mean   :NaN  
     3rd Qu.: NA   3rd Qu.: NA   3rd Qu.: NA  
     Max.   : NA   Max.   : NA   Max.   : NA  
     NA's   :3     NA's   :3     NA's   :3   

I have looked at several questions already on stackoverflow and I can not figure out what it wrong. There are no missing values in the dataset and when I condense the data to just the Rap genre, there are no duplicate values. Here is my code:

```{r}
library(tidyverse)
library(caret)
library(ROCR)
library(MLmetrics)
library(mltools)
library(rpart.plot)
```

```{r}
#1 Load the data and ensure the labels are correct. You are working to develop a model that can predict age.  
# http://archive.ics.uci.edu/ml/datasets/Adult

setwd("myfilepath")

#names <- c("genre", "artist_name", "track_name", "track_id", "popularity", "acousticness", "danceability", "duration_ms", "energy", "instrumentalness", "key", "liveness", "loudness", "mode", "speechiness", "tempo", "time_signature", "valence")

df <- read_csv("myfilepath")

View(df)


table(df$genre)

rap <- df[df$genre == 'Rap',]
View(rap)

```

```{r}
#2 Ensure all the variables are classified correctly including the target variable and collapse factors if still needed. 
rap2 <- rap[, -c(1:4)]
View(rap2)
str(rap2)

rap2[,c("key", "mode", "time_signature")] <- lapply(rap2[,c("key", "mode", "time_signature")] , as.factor)
table(rap2$time_signature)

str(rap2)

summary(rap2)

sum(is.na(rap2)) 

histogram(rap2$popularity,type='count', nint=30)

```

```{r}
#4 Split your data into test, tune, and train. (80/10/10)

set.seed(1)
part_index_1 <- caret::createDataPartition(rap2$popularity,  # split the data with a .8 probability so that 80% of the data is chosen
                                           times=1,
                                           p = 0.80,
                                           groups=1,
                                           list=FALSE)

train <- rap2[part_index_1, ]  # subset the 80% chosen in the first partition into the train set
tune_and_test <- rap2[-part_index_1, ]  # subset the remaining in a tune and test set

set.seed(1)
View(tune_and_test)

tune_and_test_index <- caret::createDataPartition(tune_and_test$popularity,  # now split the tune and test set 50-50
                                           p = .5,
                                           list = FALSE,
                                           times = 1)

tune <- tune_and_test[tune_and_test_index, ]  # subset the 50% chosen into the tune set
test <- tune_and_test[-tune_and_test_index, ]  # subset the remaining 50% into the test set

dims <- data.frame("Train Size" = nrow(train), "Tune Size" = nrow(tune), "Test Size" = nrow(test))  # create a data frame of the sizes of each set and output the dataframe
dims

```

```{r}
#5 Build your model using the training data, rpart2, and repeated cross validation as reviewed in class with the caret package.

View(train)
features <- train[,-1] #dropping 1 because it's target variable. 
View(features)
target <- train$popularity

target

str(features)

str(target)
#Three steps in building a caret ML model
#Step 1: Cross validation process-the process by which the training data will be used to build the initial model must be set. As seen below:

fitControl <- trainControl(method = "repeatedcv",
                          number = 10,
                          repeats = 5) 
View(fitControl)
# number - number of folds
# repeats - number of times the CV is repeated, takes the average of these repeat rounds
#review the documentation on https://topepo.github.io/caret/measuring-performance.htm

#Step 2: Usually involves setting a hyper-parameter search. This is optional and the hyper-parameters vary by model. Let's take a look at the documentation for the model we are going to use. Same search function as for classification 

tree.grid <- expand.grid(maxdepth=c(3:20))

#  2^(k+1)−1 = maximum number of terminal nodes (splits) when k=depth of the tree
#let's look at the documentation in two places 
# for the tune grid function: https://topepo.github.io/caret/model-training-and-tuning.html

#options for the rpart2: https://topepo.github.io/caret/train-models-by-tag.html#tree-based-model

#Step 3: Train the models
set.seed(1984)
rap_mdl_r <- train(x=features,
                y=target,
                method="rpart2",
                trControl=fitControl,
                metric="RMSE")
rap_mdl_r
",https://stackoverflow.com/questions/74661895/error-message-when-running-train-in-r-for-regression-testing,"['r', 'machine-learning', 'r-caret']",46,False,0,0 "Undo or revert a single, earlier GitLab cherry-pick commit?","

I committed like 10 individual cherry-picked changes to my new release branch, and now after regression testing found that one of those commits in the middle may have broken something. I am prepared to release a new version of the code but exclude that commit. Must I start over cherry-picking into the new release branch to do this right, or is there a way I can simply copy my current branch into a new release branch and revert or undo the single offending commit without losing those that came before or after it?
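For reference, a minimal sketch of the revert approach (branch names and the SHA are placeholders):

```sh
# copy the current release branch, then undo just the offending commit
git checkout -b release-1.0.1 release-1.0
git revert <offending-sha>   # adds a new commit that inverts that change;
                             # everything before and after it is untouched
```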

",https://stackoverflow.com/questions/74607261/undo-or-revert-a-single-earlier-gitlab-cherry-pick-commit,"['gitlab', 'cherry-pick']",27,False,0,1 Uncaught ReferenceError: autocomplete is not defined,"

We have some JavaScript that used to work; however, we recently underwent a system upgrade, and we are doing some regression testing.

Now our script no longer recognizes our function:

Uncaught ReferenceError: autocomplete is not defined

However, it seems like the issue is related to how the function is defined, and my JS is unfortunately very basic.

Our declaration is:

<script type="text/javascript">
(function (global, factory) {
  typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() :
  typeof define === 'function' && define.amd ? define(factory) :
  (global = global || self, global.autocomplete = factory());
}
(this, function () { 'use strict';
/*
     * https://github.com/kraaden/autocomplete
     * Copyright (c) 2016 Denys Krasnoshchok
     * MIT License
     */         
function autocomplete(settings) {

Any assistance or guidance would be greatly appreciated. Thanks in advance.

Edit: Just an additional error received when other JS on the page executes:

Uncaught Error: Mismatched anonymous define() module: function () { 'use strict';
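A hedged reading of the two errors together: the "Mismatched anonymous define()" message suggests an AMD loader (e.g. RequireJS) is now on the page, so the UMD wrapper takes the define(factory) branch and global.autocomplete is never assigned. One sketch of a workaround is to pin the global regardless of the loader:

```javascript
(function (global, factory) {
  // always expose the global, even when an AMD loader is present
  global.autocomplete = factory();
}(this, function () { 'use strict';
  // ... the unchanged library body from above ...
}));
```

Alternatively, register the library with the AMD loader and consume it via require(['autocomplete'], ...) instead of the global.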

",https://stackoverflow.com/questions/72628875/uncaught-referenceerror-autocomplete-is-not-defined,['javascript'],272,False,0,1 Does refactoring inline variables require a regression test if it's the only change to a function?,"

Say you have a PHP method getFoosForBar that calls a method on another class:

public function getFoosForBar(int $bar): array
{
    $helperClass = new HelperClass();
    return $helperClass->getFoos($bar);
}

You, as the smart cookie you are, recognise that $helperClass is an inline variable, and you want to refactor it to:

public function getFoosForBar(int $bar): array
{
    return (new HelperClass)->getFoos($bar);
}

Your IDE (in my case, PHPStorm) also recognises it is an inline variable and offers the same solution, so you make the change. Does this trigger a regression test for all functions that call getFoosForBar?

I've tried googling this several times. As I'm new to the programming career path, I am unsure whether this should trigger regression testing, or any testing for that matter, especially if the tests aren't automated. I appreciate your input!
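In principle a pure inline refactor does not change behavior, but a cheap characterization test makes that a non-question. A sketch with PHPUnit (the owning class and expected values are hypothetical):

```php
<?php
use PHPUnit\Framework\TestCase;

final class GetFoosForBarTest extends TestCase
{
    public function testFoosUnchangedByRefactor(): void
    {
        $subject = new FooProvider(); // hypothetical class owning getFoosForBar()
        // pin the current behavior so any refactor can be verified in seconds
        $this->assertSame(['foo1', 'foo2'], $subject->getFoosForBar(42));
    }
}
```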

",https://stackoverflow.com/questions/72123446/does-refactoring-inline-variables-require-a-regression-test-if-its-the-only-cha,"['php', 'regression-testing']",26,False,0,0 How to scroll a Ionic-Page via JavaScript Test-code,"

I'm working on an Ionic 6 web app based on Angular 13. The client's QA department wants to perform regression testing via Selenium test automation. For other projects they used window.scrollBy(0, window.innerHeight) to systematically scroll over the page and take screenshots to find regression issues. But this is not possible on Ionic pages, since the HTML body is not scrollable; only the content of the ion-content element is. Is there any way to trigger scrolling within the ion-content element via simple JavaScript? I created a Stackblitz where you can see the basic structure of my Ionic page. So far I have tried the following, but none of it worked:

document.getElementsByTagName("ion-content")[0].scrollTo(0, 300);

document.getElementsByTagName("ion-content")[0].scrollToBottom();

document.getElementsByTagName("ion-content")[0].shadowRoot.childNodes[1].scrollTo(0, 300); //tried to access the inner-scroll div

document.getElementsByTagName("ion-content")[0].shadowRoot.childNodes[1].scrollToBottom(); //tried to access the inner-scroll div

",https://stackoverflow.com/questions/71769001/how-to-scroll-a-ionic-page-via-javascript-test-code,"['javascript', 'angular', 'testing', 'ionic-framework', 'scroll']",451,False,0,1 How should I be using playwright's toHaveScreenshot() within a cucumber test in a React Typescript project?,"

I want to implement visual regression testing in a ReactJS app. I already have Playwright set up, called through Cucumber, for some other BDD UI tests, and I wanted to make use of the built-in toHaveScreenshot method for visual regression. However, whenever I run the test it throws this error:

Error: toHaveScreenshot() must be called during the test

Here's the test script definition:

package.json excerpt

"test:e2e": "cucumber-js --require cucumber.conf.js --require features/step_definitions/**/*.js --format @cucumber/pretty-formatter",

Here's an example of the code:

cucumber.conf.js

const {
  Before,
  BeforeAll,
  AfterAll,
  After,
  setDefaultTimeout,
} = require("@cucumber/cucumber");
const { chromium } = require("playwright");

// in milliseconds
setDefaultTimeout(60000);

// launch the browser
BeforeAll(async function () {
  global.browser = await chromium.launch({
    headless: false,
    slowMo: 1000,
  });
});

// close the browser
AfterAll(async function () {
  await global.browser.close();
});

// Create a new browser context and page per scenario
Before(async function () {
  global.context = await global.browser.newContext();
  global.page = await global.context.newPage();
});

// Cleanup after each scenario
After(async function () {
  await global.page.close();
  await global.context.close();
});

homepage.feature

Feature: Homepage
A simple homepage

    Scenario: Someone visiting the homepage
        Given a new visitor to the site
        When they load the homepage
        Then they see the page

homepage.js

const { Given, When, Then } = require("@cucumber/cucumber");
const { expect } = require("@playwright/test");

Given("a new visitor to the site", function () {});

When("they load the homepage", async () => {
  await page.goto("http://localhost:3000/");
});

Then("they see the page", async () => {
  const locator = page.locator('img[alt="An image you expect to see"]');
  await expect(locator).toBeVisible();
  await expect(locator).toHaveScreenshot();
});

I think the error is complaining that I'm not writing my tests inside the usual test() method, but I haven't come across anything similar in my searches and don't know how to provide that context, assuming that is the problem, while using Cucumber.

Can anyone suggest a solution? I'm at a bit of a loss.
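In case it helps: toHaveScreenshot() is wired into @playwright/test's own runner, so it cannot run under cucumber-js. A hedged sketch of a substitute is to take the screenshot yourself and diff it against a stored baseline with a pixel-diff library such as pixelmatch (the library choice and paths are assumptions):

```javascript
const fs = require("fs");
const pixelmatch = require("pixelmatch");
const { PNG } = require("pngjs");
const { Then } = require("@cucumber/cucumber");

Then("they see the page", async () => {
  const shot = PNG.sync.read(await global.page.screenshot({ fullPage: true }));
  const baseline = PNG.sync.read(fs.readFileSync("snapshots/homepage.png"));
  const diffs = pixelmatch(
    baseline.data, shot.data, null, shot.width, shot.height, { threshold: 0.1 }
  );
  if (diffs > 0) throw new Error(`${diffs} pixels differ from the baseline`);
});
```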

",https://stackoverflow.com/questions/73040364/how-should-i-be-using-playwrights-tohavescreenshot-within-a-cucumber-test-in,"['reactjs', 'typescript', 'cucumber', 'playwright']",749,False,1,0 "In Maven, what is the difference between a unit test and an integration test?","

I am adding basic regression testing to a Maven project that has no automated testing. My initial idea was to create a number of test classes, called IT<whatever>.java, to run in the integration-test phase. However, during packaging we do obfuscation and optimization, and I want to be sure that the tests run against the final JAR (or at least the final classes).

The thing is, I can't tell from reading the docs what the actual difference is between the two kinds of test. The docs mention that integration-test is run after package, which sounds promising, but the tests are excluded from the JAR so it's unlikely they're running against the final artifact. Are they run against the packaged classes? Or is the only distinction between the two test types that they are run in different phases of the build lifecycle?
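For reference, a hedged sketch of the failsafe side of this: IT*.java classes are picked up by the maven-failsafe-plugin in the integration-test phase, and the plugin can be pointed at the packaged JAR instead of target/classes via classesDirectory (a documented failsafe option, but worth verifying against your plugin version):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- run IT*.java against the final (obfuscated) artifact, not target/classes -->
    <classesDirectory>${project.build.directory}/${project.build.finalName}.jar</classesDirectory>
  </configuration>
</plugin>
```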

",https://stackoverflow.com/questions/71069528/in-maven-what-is-the-difference-between-a-unit-test-and-an-integration-test,"['maven', 'integration-testing', 'maven-surefire-plugin', 'maven-failsafe-plugin']",240,False,0,0 Net Core - Dynamic Assembly resolution,"

Question:

Is it possible to dynamically resolve dependencies at runtime in .NET Core, and thus avoid updating the 'Api App' with NuGet package B v1.0.5?

In general, I wonder whether there are best practices for such cases.

Context:

P.S. I've gone through a number of articles on the topic, but I haven't found any alternative to updating Package B, and my intention is to avoid that if possible.

",https://stackoverflow.com/questions/72474492/net-core-dynamic-assembly-resolution,"['c#', '.net', '.net-assembly', 'assembly-binding-redirect']",70,False,0,0 How can I disable React Query Devtools in my playwright visual tests?,"

I want to use Playwright for local visual regression testing. The problem is that I have React Query Devtools installed, so my visual snapshots all have it open and displayed, covering up a bunch of the content I want to protect against visual regressions.

I could make it so the test clicks the close button. That would mean I only get the little ReactQuery icon displayed, but if these tests work well I may want to use them in CI, so I don't really want any visual discrepancies between local and CI renders.

What I'm wondering is if there's something I could put in my test to disable the Devtools even though process.env.NODE_ENV === 'development'.

Note: I tried launching the tests and the dev server with the NODE_ENV environment variable set to testing. Next.js warned me that was a bad idea, and it did nothing to help :/
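
The closest thing I've sketched so far (untested, and the variable name is invented) is gating the devtools on a Next.js public env var instead of NODE_ENV, since NEXT_PUBLIC_* values can be set per command:

// Hypothetical: render the devtools only when not explicitly disabled.
// NEXT_PUBLIC_DISABLE_RQ_DEVTOOLS is an invented name, and the import path
// depends on the react-query version in use.
import { ReactQueryDevtools } from "react-query/devtools";

export function Devtools() {
  if (process.env.NEXT_PUBLIC_DISABLE_RQ_DEVTOOLS === "true") {
    return null;
  }
  return <ReactQueryDevtools initialIsOpen={false} />;
}

The visual tests would then start the dev server with NEXT_PUBLIC_DISABLE_RQ_DEVTOOLS=true while normal development keeps the devtools, but I don't know whether that counts as idiomatic.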

",https://stackoverflow.com/questions/74563753/how-can-i-disable-react-query-devtools-in-my-playwright-visual-tests,"['playwright', 'react-query']",40,True,0,2 "How can I get a separate reference to each version/artifact of the same published java package, in one JVM (stack = sbt + scala)?","

I am considering building a small tool that runs regression tests to find regressions in the data produced by two releases of the same project.

So far, I can import the project, and I can even import both versions without compilation failures.

name := "diffing-tool"

val sparkVersion = "3.1.2"
resolvers += Resolver.mavenLocal // for artifacts published locally (e.g. via sbt publishM2)
resolvers += Resolver.defaultLocal
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
  "com.foo" %% "bar" % "1.0",
  "com.foo" %% "bar" % "1.1"

)

The project compiles, and I can run the following code:


object Hello extends App {
  println("Hello world")
  val fromJar1 = com.foo.bar.Run.main()
  val fromJar2 = com.foo.bar.Run.main()

  def diff(fromJar1: Any, fromJar2: Any) = {
    println("Hello world")
  }

  diff(fromJar1, fromJar2)
}

However, I do not know of a good way to rename the packages/artifacts so that the code can refer to each version separately.

Ideas for what this could look like include:

  1. Using shading in the build.sbt file (which is usually used for dependencies of dependencies; I have not yet figured out what to do in my current use case, and I would also like to avoid sbt-assembly if possible, which is coupled with shading if I am not mistaken). I'll need detailed instructions on how to set this up.
  2. Alternatively, at runtime, using reflection, a ClassLoader, or perhaps something from the sbt.librarymanagement package to get a separate reference to each artifact. I'll need detailed instructions on how to set this up.
  3. Other ideas are very welcome.
",https://stackoverflow.com/questions/73194448/how-can-i-get-a-separate-reference-to-each-version-artifact-of-the-same-publishe,"['scala', 'maven', 'reflection', 'jvm', 'sbt']",45,False,0,0 How to specify path parameters when targeting an API Gateway with EventBridge,"

I have an ECS service sitting behind an API Gateway; for a subset of paths (/foo/{proxy+}), using the ANY method, the API Gateway proxies the requests to the ECS service.

I have an EventBridge Bus with a rule attached to it - this rule matches a subset of events that are sent over the bus (and this filtering is important - which is why we aren't using SQS). This rule has a target which points to a path (foo/events/ebevent) on the API Gateway, and we want to POST the event to it.

The infrastructure is defined as follows using Terraform:

resource "aws_cloudwatch_event_bus" "test_bus" {
  name = "test_bus"
}

resource "aws_cloudwatch_event_rule" "test_rule" {
  name        = "my_sample_rule"
  description = "Matches some events."

  event_bus_name = aws_cloudwatch_event_bus.test_bus.name

  event_pattern = <<EOF
{
  "detail-type": ["test_rule"]
}
EOF
}

resource "aws_cloudwatch_event_target" "test_api_gw_proxy_target" {
  #devstage corresponds to the deployed stage on the API Gateway
  #POST corresponds to the desired HTTP method
  arn            = "${var.payment_processor_execution_arn}/devstage/POST/foo/events/ebevent"
  rule           = aws_cloudwatch_event_rule.test_rule.name
  event_bus_name = aws_cloudwatch_event_bus.test_bus.name
  http_target {
    path_parameter_values = ["foo/events/ebevent"]
  }
}

The events are coming through to the API Gateway and being proxied to the ECS task as expected. The issue is that the slashes in the URL are encoded as %2f, which means that when the event reaches the ECS container, the URL is foo%2fevents%2febevent. Since the slashes are encoded, the URL doesn't match any patterns defined in the router on the ECS task image, and the request fails. Is there any way to stop this URL string from being encoded by EventBridge?

The alternative is to alter the routing configuration in the ECS image (to handle decoding the slashes), but there are concerns about regression testing such a significant change to the router, as it could potentially impact other requests to the task.
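
For what it's worth, the router change under discussion would be roughly the middleware below. This is a hypothetical sketch assuming an Express-style router on the ECS task (our actual stack may differ), scoped to the /foo prefix to limit the blast radius:

// Decode percent-encoded slashes before route matching, but only for the
// /foo subtree that EventBridge posts to.
const express = require("express");
const app = express();

app.use((req, res, next) => {
  if (/^\/foo%2f/i.test(req.url)) {
    req.url = req.url.replace(/%2f/gi, "/");
  }
  next();
});

app.post("/foo/events/ebevent", express.json(), (req, res) => {
  res.sendStatus(204); // acknowledge the EventBridge delivery
});

app.listen(8080);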

",https://stackoverflow.com/questions/72563311/how-to-specify-path-parameters-when-targeting-an-api-gateway-with-eventbridge,"['amazon-web-services', 'terraform', 'aws-api-gateway', 'amazon-ecs', 'aws-event-bridge']",963,True,1,1 Adding an attribute to XmlWriter causes all namespaces to become aliased,"

Here is a line of XML found in an Excel book I created with a PivotTable/Cache in it:

<pivotCacheDefinition xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" r:id="rId1" refreshOnLoad="1" refreshedBy="m" refreshedDate="44873.446783912033" createdVersion="4" refreshedVersion="4" minRefreshableVersion="3" recordCount="4">

I am using XmlWriter (and System.IO.Packaging) to modify this XML so that Excel will not use cached values and will instead recalculate from the original data every time the file is opened (you can do this in Excel; they always forget to). All this needs is an additional attribute in this header: saveData="0".

We use similar code to rewrite the worksheets holding the data, so I simply copy-pasted it and changed the element names to produce this:

Dim WR As XmlWriter
Dim WRSettings As XmlWriterSettings
WRSettings = New XmlWriterSettings() With {.CloseOutput = False}
pPivotPart.Data = New MemoryStream()
pPivotPart.Data.Position = 0
WR = XmlWriter.Create(pPivotPart.Data, WRSettings)
WR.WriteStartDocument(True)
WR.WriteStartElement("pivotCacheDefinition", cXl07WorksheetSchema)
WR.WriteAttributeString("xmlns", "r", Nothing, cXl07RelationshipSchema)
WR.WriteAttributeString("r", "Id", Nothing, "rId" & RelId)
WR.WriteAttributeString("saveData", "0")
'the rest of the lengthy code creates the other attributes and then copies over the original XML line by line

The r:id attribute is causing a problem. When it is present, XmlWriter adds an alias for the main namespace, and I get this:

<x:pivotCacheDefinition xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" saveData="0" refreshOnLoad="1" refreshedBy="m" refreshedDate="44816.473130671293" createdVersion="4" refreshedVersion="4" minRefreshableVersion="3" recordCount="4" r:Id="rId1" xmlns:x="http://schemas.openxmlformats.org/spreadsheetml/2006/main">

This makes regression testing... difficult. If I comment out the single line that inserts that attribute, all the aliasing goes away - even if I leave in the line that manually inserts that namespace for it:

<pivotCacheDefinition xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" saveData="0" refreshOnLoad="1" refreshedBy="m" refreshedDate="44816.473130671293" createdVersion="4" refreshedVersion="4" minRefreshableVersion="3" recordCount="4" xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">

I thought perhaps that manually adding the r namespace was the issue, and I tried adding just the attribute and letting XmlWriter add the namespace for me, but that did not do what I expected either; it simply omitted the "r" prefix entirely:

<x:pivotCacheDefinition saveData="0" refreshOnLoad="1" refreshedBy="m" refreshedDate="44816.473130671293" createdVersion="4" refreshedVersion="4" minRefreshableVersion="3" recordCount="4" Id="rId1" xmlns:x="http://schemas.openxmlformats.org/spreadsheetml/2006/main">

It is difficult to search for this topic because of PHP's similarly named library, but the few threads I found present seemingly similar code.

Can someone who better understands XmlWriter's NS logic explain what's happening here, and how to avoid the renaming?

UPDATE:

I'm still experimenting with it, using the Microsoft docs and various posts, and in doing so I found that this problem occurs even when these are the only lines:

WR.WriteStartElement("pivotCacheDefinition", cXl07WorksheetSchema)
WR.WriteAttributeString("xmlns", "r", Nothing, cXl07RelationshipSchema)
WR.WriteAttributeString("r", "Id", cXl07RelationshipSchema, "rId" & RelId)
WR.WriteEndElement()
WR.WriteEndDocument()

As soon as it encounters the second namespace, the first is renamed. I have tried adding the first namespace explicitly, and I have tried using the ns parameters instead of the xmlns attributes; every variation seems to have the same effect: as soon as you insert an attribute in the r namespace, the first namespace gets aliased to x.

",https://stackoverflow.com/questions/74405206/adding-an-attribute-to-xmlwriter-causes-all-namespaces-to-become-aliased,"['xmlwriter', 'system.io.packaging']",17,False,0,0 “Visual Studio is Busy” / Hang and seemingly random errors in VS 2022,"

I have been using Visual Studio for 30+ years (since it came on a dozen floppies; it likely wasn't called VS until Win 3.1). Over the past month or two, I've had more crashes, hangs, and "Visual Studio is Busy" messages than in all the previous years combined (maybe a slight exaggeration). It's junk. I have no idea where to go.

Our project has been going on for five years. It is made up of several Solutions. Four are workers running in Docker (Linux) on top of Windows 10. All components are updated (.NET 6). There is a WinForms Solution and a Xamarin Solution. We thought the Workers were all set and have spent the past few months on the Xamarin and WinForms Solutions (it's a self-funded startup; there are no resources for proper regression testing at every release of VS).

We did the usual: deleting the bin and obj directories, rebuilding, etc. I wish I could narrow it down, but I can't find a pattern, except that it usually shows the "VS is busy, reporting to Microsoft" message when loading a solution. Sometimes there is an error in a generated Docker file (Value cannot be null), or "the required Operating system is not available". After rebooting, it is magically available.

In the past, we’ve debugged with eight copies of VS running and had no issues. The build machine had 96 gigs memory, i9.

Is there any way to roll back to a 32-bit version or another older version of Visual Studio? I haven't considered JetBrains Rider before, but I am looking at it now (I know it doesn't support WinForms).

I understand I am not providing enough detail for a solution, but if you have had similar issues in the past, any guess would be helpful.

Thanks!

More Info:

I run a Worker and it loads fine. I run another Worker (in a different copy of VS) and I get the following error, which comes from a generated Docker file, with no changes made. I then close VS, reopen, and run: it works fine.

Severity Code Description Project File Line Suppression State Error MSB4064 The "OutDir" parameter is not supported by the "ContainerBuildAndLaunch" task loaded from assembly: Microsoft.VisualStudio.Containers.Tools.Tasks, Version=17.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a from the path: C:\Users\orgth\.nuget\packages\microsoft.visualstudio.azure.containers.tools.targets\1.15.1\tools\Microsoft.VisualStudio.Containers.Tools.Tasks.dll. Verify that the parameter exists on the task, the <UsingTask> points to the correct assembly, and it is a settable public instance property. PWorker C:\Users\orgth\.nuget\packages\microsoft.visualstudio.azure.containers.tools.targets\1.16.1\build\Container.targets 230

",https://stackoverflow.com/questions/72892638/visual-studio-is-busy-hang-and-seemingly-random-errors-in-vs-2022,"['docker', 'visual-studio']",1228,True,1,1 "Best Practices for Maintaining and Updating Large, Private NPM Package Ecosystem","

What is the easiest/best way to maintain NPM packages and keep them up to date in a large, internal ecosystem?

Say we have a number of different private NPM packages, each a project with its own repository, that are consumed in a tree-like fashion. Note that this diagram is a very simplified version of the problem, for discussion purposes; the actual dependency tree is easily 10x as complex. If you need to make a change to a low-level package, what is the best/easiest way to update the rest of the tree?

Example: Breaking Change to a Low-Level Package

We'll assume all of these packages are currently on v1.0.0 for the sake of this example. To update @adc/package-1, one must:

This in total requires six PRs, since each package is its own repository. Most of these updates are just version-number bumps, and the PRs must be merged in a particular order. This is a lot of overhead for a simple change, and again, this is very simplified compared with the actual package ecosystem that may exist.

What happens if we don't update the mid-level packages?

The top-level websites will now be using two different versions of @adc/package-1.

I believe this can lead to non-deterministic behavior, and it also makes it difficult for the next developer to know which versions they can or should use if they need to make an update. The mid-level packages would become out of date, and eventually we would have to update all @adc/ package versions to the latest. In a world where you try to mitigate risk, this would mean regression testing the entire package tree and all consuming packages/websites.
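
To make the overhead concrete: most of those PRs reduce to a one-line version bump, which could at least be scripted. Here is a sketch (the helper and the version range are invented; this is not something we run today):

// bump-dep.js - hypothetical helper: rewrite one @adc/* dependency range
// in a checked-out package's package.json.
const fs = require("fs");

function bumpDependency(pkgJsonPath, depName, newRange) {
  const pkg = JSON.parse(fs.readFileSync(pkgJsonPath, "utf8"));
  for (const field of ["dependencies", "devDependencies", "peerDependencies"]) {
    if (pkg[field] && pkg[field][depName]) {
      pkg[field][depName] = newRange;
    }
  }
  fs.writeFileSync(pkgJsonPath, JSON.stringify(pkg, null, 2) + "\n");
}

// e.g. run once per consuming repository:
bumpDependency("package.json", "@adc/package-1", "^1.0.1");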

",https://stackoverflow.com/questions/72692260/best-practices-for-maintaining-and-updating-large-private-npm-package-ecosystem,"['node.js', 'npm', 'package', 'dependencies', 'yarnpkg']",224,True,2,1 Testing complex edge cases with Junit,"

I need to write a bunch of tests for a method that's hidden behind a few layers of logic. I'm relatively new to JUnit and entirely new to parameterized tests, and I need help understanding whether parameterized tests are an appropriate solution for my task.

Goal

I want a good, idiomatic way to test a series of datasets with subtle differences in logic and make sure the result matches the expectation; basically, I want to test edge cases. The method I want to test both takes and generates a list of maps. Let's say the maps are of individual cars and their attributes. A pre-defined document of rules and relationships between attributes outlines whether one attribute is valid in combination with other attributes. Access to external frameworks and libraries is limited, but JUnit 4, JUnit 5, Mockito, and other basics are available.

For each tested dataset, I want to test rules like:

Current tests use arbitrarily picked combinations of attributes and rules like the three above. These happen to give decent test coverage according to JUnit; on top of that, the code is covered by regression testing. So while 100% edge-case coverage isn't the goal, finding an efficient way to improve path coverage is interesting, both for knowledge sharing and to improve stability and enable potential refactoring. Also, apart from improving my skills, it would score me unofficial office points, which is nice.

If the solution I'm looking for would be too complex to be maintainable, I would be happy to hear that as well.

Question

  1. What is the most idiomatic way to test edge cases on a method like buildCars below? Assume you have 5-10 edge cases/tests of a similar nature, and at least a few dozen similar methods that could benefit from improved testing structure and coverage.
  2. Would a parameterized test taking a list of <map of input, boolean expectation> pairs work?
  3. A noob question regarding parameterized tests (assuming they are the best solution): as hinted in this thread, would I need to implement multiple parameterized tests to pass into the same stream, or is there a better way?

Generalized code snippet for clarification

In the example below, you can see that we take a list of maps from the database, and then, based on several conditions, plenty of attributes are set.

 /**
 * This method and others like it take input from
 * the database and return a list of objects (here, cars)
 * that I want to improve testing on by testing edge cases
 * according to a list of rules.
 */
 public static List<Car> buildCars(List<HashMap<String, Object>> sqlResults) {
        List<Car> carList = new ArrayList<>();
        int idCounter = 0;

        for ( HashMap<String, Object> sqlRecord : sqlResults) {
            if (!isCar(sqlRecord)) continue; // skip records that aren't cars
            carList.add(idCounter, setCarAttributes(sqlRecord));
            idCounter += 1;
        }
        return carList; // list of objects to be tested so that they match the expectations and relationships outlined in the mapping document
    }

 /** TODO: This method needs refactoring, 
 * but first better unit tests to cover the 
 * complex conditions that exist in the aforementioned
 * mapping document of rules and relations
 */
private static Car setCarAttributes(HashMap<String, Object> input) {
        Car car = new Car();

        // Set a bunch of attributes based on mapping related to brands and other stuff
        car.setBlinkers(input.get(ATTR_MANUFACTURER).equals(MAN_BMW) ? false : true);
        car.setHasAC(input.get(EQUIPMENT_AC).equals(true) ? true : false);
        car.setHorsePower(input.get(ATTR_COUNTRYORIGIN).equals(FRANCE) ? 69 : 9001);

        // Set some more complex attributes, here just one but could also be nested with more attributes
        if ((int) input.get(TOP_SPEED) >= 88 && input.get(ATTR_MANUFACTURER).equals(MAN_DMC) &&  input.containsKey("Flux Capacitor")) {
            car.setSpeedyness("Great Scott!");
            setTimeTravelAttributes(car);
        } else if ((int) input.get(TOP_SPEED) <= 200) {
            car.setSpeedyness("Snail");
            setSlowCarAttributes(car);
        } else if ((int) input.get(TOP_SPEED) <= 300) {
            car.setSpeedyness("not Snail");
        } else {
            car.setSpeedyness("Twin turbo snail");
            setSuperFastCarAttributes(car);
        }

        // set more nested attributes
        // then set even more attributes based on weird logic

        return car;
    }
",https://stackoverflow.com/questions/72987285/testing-complex-edge-cases-with-junit,"['java', 'unit-testing', 'junit']",74,False,0,0