(V 1.1.0 May 2025)
Synthesise and correlate Likert scales, and similar rating-scale data, with predefined first & second moments (mean and standard deviation), Cronbach's Alpha, Factor Loadings, and other summary statistics.
LikertMakeR synthesises rating-scale data. Such scales are constrained by upper and lower bounds and discrete increments.
Purpose
The package is intended for:
'reproducing' or 'reverse-engineering' rating-scale data for further analysis and visualisation when only summary statistics have been reported,
teaching: helping researchers and students to better understand the relationships among scale properties, sample size, number of items, etc.,
checking the feasibility of scale moments with given scale and correlation properties.
Functions in this version of LikertMakeR are:
lfast() applies a simple Evolutionary Algorithm, based on repeated random samples from a scaled Beta distribution, to approximate predefined first and second moments.
lcor() rearranges the values in the columns of a dataframe so that they are correlated to match a predefined correlation matrix.
makeCorrAlpha() constructs a random item correlation matrix of given dimensions and predefined Cronbach's Alpha.
makeItems() is a wrapper function for lfast() and lcor() to generate synthetic rating-scale data with predefined first and second moments and a predefined correlation matrix.
makeCorrLoadings() constructs an item correlation matrix based on factor loadings and factor correlations, as might be reported in Exploratory Factor Analysis (EFA) or Structural Equation Modelling (SEM).
makeItemsScale() generates a dataframe of rating-scale items from a summative scale and desired Cronbach's Alpha.
makePaired() generates a dataset from paired-sample t-test summary statistics.
correlateScales() generates a multidimensional dataframe by combining several dataframes of rating-scale items so that their summated scales are correlated according to a predefined correlation matrix.
alpha() calculates Cronbach's Alpha from a given correlation matrix or a given dataframe.
eigenvalues() calculates the eigenvalues of a correlation matrix, reports on the positive-definite status of the matrix, and, optionally, displays a scree plot to visualise the eigenvalues.
A Likert scale is the mean, or sum, of several ordinal rating scales. They are bipolar (usually 'agree-disagree') responses to propositions that are determined to be moderately-to-highly correlated with each other, each capturing a facet of a theoretical construct.
Summated rating scales are not continuous or unbounded.
For example, a 5-point Likert scale that is constructed with, say, five items (questions) will have a summed range of between 5 (all rated '1') and 25 (all rated '5') with all integers in between, and the mean range will be '1' to '5' with intervals of 1/5 = 0.20. A 7-point Likert scale constructed from eight items will have a summed range between 8 (all rated '1') and 56 (all rated '7') with all integers in between, and the mean range will be '1' to '7' with intervals of 1/8 = 0.125.
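This arithmetic can be illustrated directly in R (an illustration only, not part of the package examples), here for a five-item, five-point scale:
## possible summed and mean values of a five-item, five-point Likert scale
items <- 5
possible_sums <- seq(1 * items, 5 * items, by = 1) # 5, 6, ..., 25
possible_means <- possible_sums / items            # 1.0, 1.2, ..., 5.0 in steps of 0.20
head(possible_means)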
Technically, because Likert scales, and similar rating scales are bounded with discrete intervals, parametric statistics (such as mean, standard deviation, and correlation) should not be applied to summated rating scales. In practice, however, such parametric statistics are commonly used in the social sciences because:
they are in common usage and easily understood,
results and conclusions drawn from technically-correct non-parametric statistics are (almost) always the same as for parametric statistics for such data.
D'Alessandro et al. (2020) argue that a summated scale, made with multiple items, 'approaches' an interval scale measure.
A single Likert-scale item, such as the response to one 1-to-5 agree-disagree question, should not be analysed on its own by professional or responsible researchers: there is too much random error in a single item. Rensis Likert (1932) designed the scale on the logic that a random overstatement on one item is likely to be compensated by a random understatement on another, so that, when multiple items are combined, we get a reasonably consistent, internally reliable measure of the target construct.
Alternative approaches to synthesising scales
Typically, a researcher will synthesise rating-scale data by sampling with a predetermined probability distribution.
For example, the following code will generate a vector of values for a single Likert-scale item, with approximately the given probabilities.
n <- 128
sample(1:5, n,
  replace = TRUE,
  prob = c(0.1, 0.2, 0.4, 0.2, 0.1)
)
This approach is good for testing Likert items, but it does not help when working on complete Likert scales, or when we want to specify means and standard deviations as they might be reported in published research.
The function lfast() allows the user to specify exact univariate statistics as they might ordinarily be reported. lcor() will take multiple scales created with lfast() and rearrange values so that the vectors are correlated.
makeCorrAlpha() generates a correlation matrix from a predefined Cronbach's Alpha, enabling the user to apply makeItems() to generate scale items that produce an exact Cronbach's Alpha. makeCorrLoadings() generates a correlation matrix from factor-loadings data, enabling the user to apply makeItems() to generate multidimensional data.
makeItems() will generate synthetic rating-scale data with predefined first and second moments and a predefined correlation matrix. makeItemsScale() generates a dataframe of rating-scale items from a summative scale and desired Cronbach's Alpha. correlateScales() generates a multidimensional dataframe by combining several dataframes of rating-scale items so that their summated scales are correlated according to a predefined correlation matrix.
To download and install the package, run the following code from your R console.
From CRAN:
install.packages('LikertMakeR')
The latest development version is available from the author's GitHub repository.
library(devtools)
install_github("WinzarH/LikertMakeR")
Generate synthetic rating scales lfast()
lfast(n, mean, sd, lowerbound, upperbound, items = 1, precision = 0)
lfast() arguments
n: sample size
mean: desired mean
sd: desired standard deviation
lowerbound: desired lower bound (e.g. '1' for a 1-5 rating scale)
upperbound: desired upper bound (e.g. '5' for a 1-5 rating scale)
items: number of items making the scale. Default = '1'
precision: can relax the level of accuracy of moments. Default = '0'
lfast() Example: a five-item, seven-point Likert scale
x <- lfast(
n = 128,
mean = 4.5,
sd = 1.0,
lowerbound = 1,
upperbound = 7,
items = 5
)
lfast() Example: a four-item, seven-point Likert scale with negative-to-positive scores
x <- lfast(
n = 128,
mean = 1.0,
sd = 1.0,
lowerbound = -3,
upperbound = 3,
items = 4
)
lfast() Example: a five-item, five-point Likert scale with moderate precision
x <- lfast(
n = 256,
mean = 3.25,
sd = 1.0,
lowerbound = 1,
upperbound = 5,
items = 5,
precision = 4
)
lfast() Example: an 11-point likelihood-of-purchase scale
x <- lfast(256, 2.5, 2.5, 0, 10)
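As a quick check (illustrative only, not part of the original examples), you can confirm that the last vector reproduces the requested moments and bounds:
mean(x) |> round(3) # should be close to 2.5
sd(x) |> round(3)   # should be close to 2.5
range(x)            # should lie within 0 and 10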
Correlating vectors of synthetic rating scales lcor()
The function lcor() applies a simple evolutionary algorithm to rearrange the values in the columns of a data set so that they are correlated at a specified level. lcor() does not change the values; it swaps their positions within each column, so univariate statistics do not change but correlations with other columns do.
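The key property behind this is that any rearrangement of a column leaves its univariate statistics unchanged. A small illustration (not package internals):
## a random permutation of a vector keeps its mean and sd
v <- lfast(32, 3.0, 1.0, 1, 5, 4)
shuffled <- sample(v)
c(mean(v), mean(shuffled)) # identical
c(sd(v), sd(shuffled))     # identical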
lcor() usage lcor(data, target)
lcor() arguments
data: a starter data set of rating-scales
target: the target correlation matrix
n <- 64
x1 <- lfast(n, 3.5, 1.00, 1, 5, 5)
x2 <- lfast(n, 1.5, 0.75, 1, 5, 5)
x3 <- lfast(n, 3.0, 1.70, 1, 5, 5)
x4 <- lfast(n, 2.5, 1.50, 1, 5, 5)
mydat4 <- data.frame(x1, x2, x3, x4)
head(mydat4)
cor(mydat4) |> round(3)
Define a target correlation matrix
tgt4 <- matrix(
c(
1.00, 0.55, 0.60, 0.75,
0.55, 1.00, 0.25, 0.65,
0.60, 0.25, 1.00, 0.80,
0.75, 0.65, 0.80, 1.00
),
nrow = 4
)
lcor() application
new4 <- lcor(data = mydat4, target = tgt4)
cor(new4) |> round(3)
lcor() example #2: three starting columns and a different target correlation matrix
mydat3 <- data.frame(x1, x2, x3)
tgt3 <- matrix(
c(
1.00, -0.50, -0.85,
-0.50, 1.00, 0.60,
-0.85, 0.60, 1.00
),
nrow = 3
)
Apply lcor()
new3 <- lcor(mydat3, tgt3)
cor(new3) |> round(3)
Generate a correlation matrix from Cronbach's Alpha makeCorrAlpha()
makeCorrAlpha() constructs a random correlation matrix of given dimensions and predefined Cronbach's Alpha.
makeCorrAlpha() usage makeCorrAlpha(items, alpha, variance = 0.5, precision = 0)
makeCorrAlpha() arguments
items: 'k', dimensions (number of rows & columns) of the desired correlation matrix
alpha: target Cronbach's Alpha (usually positive, must be greater than '-1' and less than '+1')
variance: standard deviation of values sampled from a normally-distributed log transformation. Default = '0.5'. A value of '0' makes all values in the correlation matrix the same, equal to the mean correlation needed to produce the desired Alpha. A value of '2', or more, risks producing a matrix that is not positive-definite, so not feasible.
precision: a value between '0' and '3' to add some random variation around the target Cronbach's Alpha. Default = '0'. A value of '0' produces the desired Alpha, generally exact to two decimal places. Higher values produce increasingly random values around the desired Alpha.
Random values generated by makeCorrAlpha() are volatile. makeCorrAlpha() may not generate a feasible (positive-definite) correlation matrix, especially when the variance parameter is high relative to the desired Alpha and the number of items.
makeCorrAlpha() will inform the user if the resulting correlation matrix is positive definite, or not.
Because solutions are so volatile, a feasible solution may still be possible, and often is, even when a particular run returns a correlation matrix that is not positive-definite. The user is encouraged to try again, possibly several times, to find one.
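One simple way to act on this advice (a sketch, not a package feature; parameter values are illustrative) is to loop until all eigenvalues are positive:
## retry makeCorrAlpha() until the result is positive-definite
repeat {
  cm <- makeCorrAlpha(items = 6, alpha = 0.90, variance = 1.5)
  if (min(eigen(cm)$values) > 0) break
}
eigenvalues(cm)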
makeCorrAlpha() examples
four variables, Alpha = 0.85
define parameters
items <- 4
alpha <- 0.85
apply makeCorrAlpha() function
cor_matrix_4 <- makeCorrAlpha(items, alpha)
test output with Helper functions
alpha(cor_matrix_4)
eigenvalues(cor_matrix_4, 1)
eight variables, Alpha = 0.95, larger variance
define parameters
items <- 8
alpha <- 0.95
variance <- 1.0
apply makeCorrAlpha() function
cor_matrix_8 <- makeCorrAlpha(items, alpha, variance)
test output
alpha(cor_matrix_8)
eigenvalues(cor_matrix_8, 1)
repeated with random variation around Alpha
define parameters
precision <- 2
apply makeCorrAlpha() function
cor_matrix_8a <- makeCorrAlpha(items, alpha, variance, precision)
test output
alpha(cor_matrix_8a)
eigenvalues(cor_matrix_8a, 1)
Generate a correlation matrix from factor loadings makeCorrLoadings()
makeCorrLoadings() generates a correlation matrix from factor loadings and factor correlations as might be seen in Exploratory Factor Analysis (EFA) or a Structural Equation Model (SEM).
makeCorrLoadings() usage makeCorrLoadings(loadings, factorCor = NULL, uniquenesses = NULL, nearPD = FALSE)
makeCorrLoadings() arguments
loadings: 'k' (items) by 'f' (factors) matrix of standardised factor loadings. Item names and factor names can be taken from the row names (items) and the column names (factors), if present.
factorCor: 'f' x 'f' factor correlation matrix. If not present, then we assume that the factors are uncorrelated (orthogonal), which is rare in practice, and the function applies an identity matrix for factorCor.
uniquenesses: length 'k' vector of uniquenesses. If NULL (the default), uniquenesses are computed from the calculated communalities.
nearPD: (logical) If TRUE, then the function calls the nearPD function from the Matrix package to transform the resulting correlation matrix into the nearest positive-definite matrix. Obviously, this applies only if the resulting correlation matrix is not positive definite. (It should never be needed.)
'Censored' loadings (for example, where loadings less than some small value, often '0.30', are removed for ease of communication) tend to severely reduce the accuracy of the makeCorrLoadings() function. For a detailed demonstration, see the file makeCorrLoadings_Validate.pdf on the package website on GitHub.
Example loadings
factorLoadings <- matrix(
c(
0.05, 0.20, 0.70,
0.10, 0.05, 0.80,
0.05, 0.15, 0.85,
0.20, 0.85, 0.15,
0.05, 0.85, 0.10,
0.10, 0.90, 0.05,
0.90, 0.15, 0.05,
0.80, 0.10, 0.10
),
nrow = 8, ncol = 3, byrow = TRUE
)
row and column names
rownames(factorLoadings) <- c("Q1", "Q2", "Q3", "Q4", "Q5", "Q6", "Q7", "Q8")
colnames(factorLoadings) <- c("Factor1", "Factor2", "Factor3")
Factor correlation matrix
factorCor <- matrix(
c(
1.0, 0.5, 0.4,
0.5, 1.0, 0.3,
0.4, 0.3, 1.0
),
nrow = 3, byrow = TRUE
)
Apply the function
itemCorrelations <- makeCorrLoadings(factorLoadings, factorCor)
round(itemCorrelations, 3)
Assuming orthogonal factors
itemCors <- makeCorrLoadings(factorLoadings)
round(itemCors, 3)
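The uniquenesses default described in the arguments above follows standard factor-analysis algebra. The sketch below applies that algebra to the orthogonal-factors example just shown; it is an illustration of the identity, not necessarily the package's internal computation:
## for orthogonal factors, communality_i = sum of squared loadings in row i,
## and uniqueness_i = 1 - communality_i
## (with correlated factors, common variance is diag(L %*% Phi %*% t(L)))
communalities <- rowSums(factorLoadings^2)
uniquenesses <- 1 - communalities
round(cbind(communalities, uniquenesses), 3)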
Generate a dataframe of rating scales from a correlation matrix and predefined moments makeItems()
makeItems() generates a dataframe of random discrete values from a scaled Beta distribution so the data replicate a rating scale, and are correlated close to a predefined correlation matrix.
makeItems() is a wrapper function for:
lfast(), which generates a vector that best fits the desired moments, and
lcor(), which rearranges values in each column of the dataframe so they closely match the desired correlation matrix.
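Conceptually, makeItems() behaves like the following two-column sketch: build each column with lfast(), then rearrange the columns with lcor(). The real function handles the details internally, so treat this only as an illustration:
## lfast() + lcor() pipeline sketch (illustrative values)
n <- 32
means <- c(3.0, 3.5)
sds <- c(1.0, 0.8)
tgt <- matrix(c(1.0, 0.6, 0.6, 1.0), nrow = 2)
cols <- mapply(
  lfast,
  mean = means, sd = sds,
  MoreArgs = list(n = n, lowerbound = 1, upperbound = 5, items = 4)
)
sketch <- lcor(data.frame(cols), tgt)
cor(sketch) |> round(3)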
makeItems() usage makeItems(n, means, sds, lowerbound, upperbound, cormatrix)
makeItems() arguments
n: number of observations to generate
means: target means: a vector of length 'k' of mean values for each scale item
sds: target standard deviations: a vector of length 'k' of standard-deviation values for each scale item
lowerbound: vector of length 'k' (same as rows & columns of the correlation matrix) of values for the lower bound of each scale item (e.g. '1' for a 1-5 rating scale)
upperbound: vector of length 'k' (same as rows & columns of the correlation matrix) of values for the upper bound of each scale item (e.g. '5' for a 1-5 rating scale)
cormatrix: target correlation matrix: a 'k' x 'k' square symmetric matrix of values ranging between '-1' and '+1', with '1' in the diagonal.
n <- 16
dfMeans <- c(2.5, 3.0, 3.0, 3.5)
dfSds <- c(1.0, 1.0, 1.5, 0.75)
lowerbound <- rep(1, 4)
upperbound <- rep(5, 4)
corMat <- matrix(
c(
1.00, 0.25, 0.35, 0.40,
0.25, 1.00, 0.70, 0.75,
0.35, 0.70, 1.00, 0.80,
0.40, 0.75, 0.80, 1.00
),
nrow = 4, ncol = 4
)
apply function
df <- makeItems(
n = n,
means = dfMeans,
sds = dfSds,
lowerbound = lowerbound,
upperbound = upperbound,
cormatrix = corMat
)
test function
print(df)
apply(df, 2, mean) |> round(3)
apply(df, 2, sd) |> round(3)
cor(df) |> round(3)
Generate a dataframe of rating-scale items from a summated rating scale makeItemsScale()
makeItemsScale() usage makeItemsScale(scale, lowerbound, upperbound, items,
alpha = 0.8, variance = 0.5)
makeItemsScale() arguments
scale: a vector or dataframe of the summated rating scale. Should range from ('lowerbound' * 'items') to ('upperbound' * 'items')
lowerbound: lower bound of the scale item (example: '1' in a '1' to '5' rating)
upperbound: upper bound of the scale item (example: '5' in a '1' to '5' rating)
items: 'k', or number of columns to generate
alpha: desired Cronbach's Alpha. Default = '0.8'
variance: quantile for selecting the combination of items that give the summated scores. Must lie between '0' (minimum variance) and '1' (maximum variance). Default = '0.5'.
n <- 64
mean <- 3.5
sd <- 1.00
lowerbound <- 1
upperbound <- 5
items <- 4
meanScale <- lfast(
n = n, mean = mean, sd = sd,
lowerbound = lowerbound, upperbound = upperbound,
items = items
)
summatedScale <- meanScale * items
create items with makeItemsScale()
newItems_1 <- makeItemsScale(
scale = summatedScale,
lowerbound = lowerbound,
upperbound = upperbound,
items = items
)
cor(newItems_1) |> round(2)
alpha(data = newItems_1)
eigenvalues(cor(newItems_1), 1)
makeItemsScale() with same summated values and higher alpha
newItems_2 <- makeItemsScale(
scale = summatedScale,
lowerbound = lowerbound,
upperbound = upperbound,
items = items,
alpha = 0.9
)
cor(newItems_2) |> round(2)
alpha(data = newItems_2)
eigenvalues(cor(newItems_2), 1)
same summated values with a lower alpha, which may require higher variance
newItems_3 <- makeItemsScale(
scale = summatedScale,
lowerbound = lowerbound,
upperbound = upperbound,
items = items,
alpha = 0.6,
variance = 0.7
)
cor(newItems_3) |> round(2)
alpha(data = newItems_3)
eigenvalues(cor(newItems_3), 1)
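An illustrative check, not part of the original examples: if makeItemsScale() reproduces the summated scale exactly, as intended, the row sums of each generated dataframe should match the original scale.
## rows of the generated dataframe should sum back to the summated scale
all.equal(unname(rowSums(newItems_3)), summatedScale)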
Create a dataframe for paired-sample t-test makePaired()
makePaired() generates a dataset from paired-sample t-test summary statistics.
makePaired() generates correlated values so the data replicate rating scales taken, for example, in a before and after experimental design. The function is effectively a wrapper function for lfast() and lcor() with the addition of a t-statistic from which the between-column correlation is inferred.
Paired t-tests apply to observations that are associated with each other. For example: the same people before and after a treatment; the same people rating two different objects; ratings by husband & wife; etc.
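The between-column correlation implied by a paired t-statistic can be recovered with standard algebra. The helper below, implied_r(), is a hypothetical illustration of that algebra, not the package's internal code:
## paired t: t = dbar / (sd_d / sqrt(n)), with
## sd_d^2 = sd1^2 + sd2^2 - 2 * r * sd1 * sd2; solve for r
implied_r <- function(n, means, sds, t_value) {
  dbar <- means[1] - means[2]
  sd_d <- dbar * sqrt(n) / t_value
  (sds[1]^2 + sds[2]^2 - sd_d^2) / (2 * sds[1] * sds[2])
}
implied_r(n = 20, means = c(2.5, 3.0), sds = c(1.0, 1.5), t_value = -2.5) |> round(3)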
makePaired() usage makePaired(n, means, sds, t_value, lowerbound, upperbound, items = 1, precision = 0)
makePaired() arguments
n: sample size
means: a vector of two target means, one for each measurement occasion
sds: a vector of two target standard deviations
t_value: desired paired-sample t-statistic
lowerbound: lower bound of the scale items (e.g. '1' for a 1-5 rating scale)
upperbound: upper bound of the scale items (e.g. '5' for a 1-5 rating scale)
items: number of items making each scale. Default = '1'
precision: can relax the level of accuracy of moments. Default = '0'
n <- 20
means <- c(2.5, 3.0)
sds <- c(1.0, 1.5)
lowerbound <- 1
upperbound <- 5
items <- 6
t <- -2.5
pairedDat <- makePaired(n = n, means = means, sds = sds, t_value = t, lowerbound = lowerbound, upperbound = upperbound, items = items)
str(pairedDat)
cor(pairedDat) |> round(2)
pairedMoments <- data.frame(
  mean = apply(pairedDat, MARGIN = 2, FUN = mean) |> round(3),
  sd = apply(pairedDat, MARGIN = 2, FUN = sd) |> round(3)
) |> t()
pairedMoments
t.test(pairedDat$X1, pairedDat$X2, paired = TRUE)
Create a multidimensional dataframe of scale items as we might see from a questionnaire correlateScales()
correlateScales() takes several dataframes of rating-scale items and rearranges their rows so that the scales are correlated according to a predefined correlation matrix. Univariate statistics for each dataframe of rating-scale items do not change, and inter-item correlations within a dataframe do not change, but their correlations with rating-scale items in other dataframes do change.
correlateScales() usage correlateScales(dataframes, scalecors)
correlateScales() arguments
dataframes: a list of 'k' dataframes to be rearranged and combined
scalecors: target correlation matrix: a symmetric k*k positive-semi-definite matrix, where 'k' is the number of dataframes
n <- 64
lower <- 1
upper <- 5
attitude #1
cor_1 <- makeCorrAlpha(items = 3, alpha = 0.85)
means_1 <- c(2.5, 2.5, 3.0)
sds_1 <- c(0.9, 1.0, 1.0)
Att_1 <- makeItems(
n, means_1, sds_1,
rep(lower, 3), rep(upper, 3),
cor_1
)
attitude #2
cor_2 <- makeCorrAlpha(items = 3, alpha = 0.80)
means_2 <- c(2.5, 3.0, 3.5)
sds_2 <- c(1.0, 1.5, 1.0)
Att_2 <- makeItems(
n, means_2, sds_2,
rep(lower, 3), rep(upper, 3),
cor_2
)
attitude #3
cor_3 <- makeCorrAlpha(items = 3, alpha = 0.75)
means_3 <- c(2.5, 3.0, 3.5)
sds_3 <- c(1.0, 1.5, 1.0)
Att_3 <- makeItems(
n, means_3, sds_3,
rep(lower, 3), rep(upper, 3),
cor_3
)
correlateScales() parameters
target scale correlation matrix
scale_cors <- matrix(
c(
1.0, 0.6, 0.5,
0.6, 1.0, 0.4,
0.5, 0.4, 1.0
),
nrow = 3
)
initial data frames
data_frames <- list("A1" = Att_1, "A2" = Att_2, "A3" = Att_3)
apply the correlateScales() function
my_correlated_scales <- correlateScales(
dataframes = data_frames,
scalecors = scale_cors
)
Check the properties of our derived dataframe
data structure
str(my_correlated_scales)
inter-item correlations
cor(my_correlated_scales) |> round(2)
eigenvalues of dataframe correlations
eigenvalues(cormatrix = cor(my_correlated_scales), scree = TRUE) |>
round(2)
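An illustrative check of the claim above that within-scale correlations are preserved (assuming the first three columns of the combined dataframe correspond to Att_1):
## inter-item correlations within the first scale, before and after
cor(Att_1) |> round(2)
cor(my_correlated_scales[, 1:3]) |> round(2)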
Helper functions
LikertMakeR includes two additional functions that may be of help when examining parameters and output.
alpha() calculates Cronbach's Alpha from a given correlation matrix or a given dataframe.
eigenvalues() calculates the eigenvalues of a correlation matrix, reports on whether the matrix is positive definite, and optionally displays a scree plot.
alpha() accepts, as input, either a correlation matrix or a dataframe. If both are submitted, then the correlation matrix is used by default, with a message to that effect.
alpha() usage alpha(cormatrix = NULL, data = NULL)
alpha() arguments
cormatrix: correlation matrix for examination: a square symmetric matrix with values ranging from '-1' to '+1' and '1' in the diagonal
data: a data frame or data matrix
df <- data.frame(
V1 = c(4, 2, 4, 3, 2, 2, 2, 1),
V2 = c(4, 1, 3, 4, 4, 3, 2, 3),
V3 = c(4, 1, 3, 5, 4, 1, 4, 2),
V4 = c(4, 3, 4, 5, 3, 3, 3, 3)
)
example correlation matrix
corMat <- matrix(
c(
1.00, 0.35, 0.45, 0.70,
0.35, 1.00, 0.60, 0.55,
0.45, 0.60, 1.00, 0.65,
0.70, 0.55, 0.65, 1.00
),
nrow = 4, ncol = 4
)
apply function examples
alpha(cormatrix = corMat)
alpha(data = df)
alpha(NULL, df)
alpha(corMat, df)
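As a cross-check on the correlation-matrix example, the standardised-alpha formula can be computed by hand. This is a standard identity, not necessarily the exact internal computation of alpha():
## standardised alpha: k * rbar / (1 + (k - 1) * rbar),
## where rbar is the mean off-diagonal correlation
k <- ncol(corMat)
rbar <- mean(corMat[lower.tri(corMat)])
k * rbar / (1 + (k - 1) * rbar)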
eigenvalues()
eigenvalues() calculates eigenvalues of a correlation matrix, reports on whether the matrix is positive-definite, and optionally produces a scree plot.
eigenvalues() usage eigenvalues(cormatrix, scree = FALSE)
eigenvalues() arguments
cormatrix: a correlation matrix.
scree: (logical) default = FALSE. If TRUE (or 1), then eigenvalues() produces a scree plot to illustrate the eigenvalues.
correlationMatrix <- matrix(
c(
1.00, 0.25, 0.35, 0.40,
0.25, 1.00, 0.70, 0.75,
0.35, 0.70, 1.00, 0.80,
0.40, 0.75, 0.80, 1.00
),
nrow = 4, ncol = 4
)
apply function
evals <- eigenvalues(cormatrix = correlationMatrix)
print(evals)
evals <- eigenvalues(correlationMatrix, 1)
print(evals)
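As a sanity check (assuming eigenvalues() returns the same eigenvalues that base R computes), the results can be compared with eigen():
## compare with base R
eigen(correlationMatrix)$values |> round(3)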
To cite LikertMakeR:
APA:
Winzar, H. (2022). LikertMakeR: Synthesise and correlate Likert-scale
and related rating-scale data with predefined first & second moments,
Version 1.0.1 (2025),
The Comprehensive R Archive Network (CRAN),
<https://CRAN.R-project.org/package=LikertMakeR>
BIB:
@software{winzar2022,
title = {LikertMakeR: Synthesise and correlate Likert-scale
and related rating-scale data with predefined first & second moments},
author = {Hume Winzar},
abstract = {LikertMakeR synthesises Likert scale and related rating-scale data with predefined means and standard deviations, and optionally correlates these vectors to fit a predefined correlation matrix or Cronbach's Alpha.},
journal = {The Comprehensive R Archive Network (CRAN)},
month = {12},
year = {2022},
version = {1.0.1 (2025)},
url = {https://CRAN.R-project.org/package=LikertMakeR},
}