(1) Naive reverse (as above):

rev1 [] = []
rev1 (x:xs) = rev1 xs ++ [x]

This is an order n^2 algorithm, but it is conceptually the simplest
and may be viewed as a specification of reverse.
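
To see where the quadratic cost comes from, here is a sketch of the evaluation, using the definition above as a standalone program:

```haskell
-- Naive reverse, exactly as defined above.
rev1 :: [a] -> [a]
rev1 [] = []
rev1 (x:xs) = rev1 xs ++ [x]

-- Unfolding rev1 [1,2,3] step by step:
--   rev1 [1,2,3] = rev1 [2,3] ++ [1]
--                = (rev1 [3] ++ [2]) ++ [1]
--                = (([] ++ [3]) ++ [2]) ++ [1]
--                = [3,2,1]
-- Each (++) re-walks the partial result built so far, so the total
-- work is 1 + 2 + ... + n, i.e. order n^2.

main :: IO ()
main = print (rev1 [1,2,3])  -- [3,2,1]
```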


(2) Naive reverse using fold(right):

rev2 = foldr (\x xs -> xs ++ [x]) []

We may write this using a "where" clause rather than a lambda
abstraction as follows:

rev2' = foldr snoc [] where
  snoc x ys = ys ++ [x]
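
To make the connection with the naive reverse explicit, here is a sketch unfolding the foldr, using the definitions above:

```haskell
-- rev2 via foldr with snoc, as above.
rev2 :: [a] -> [a]
rev2 = foldr snoc []
  where snoc x ys = ys ++ [x]

-- Unfolding the foldr:
--   rev2 [1,2,3] = snoc 1 (snoc 2 (snoc 3 []))
--                = (([] ++ [3]) ++ [2]) ++ [1]
--                = [3,2,1]
-- This performs exactly the same sequence of appends as rev1, so it
-- is still order n^2.

main :: IO ()
main = print (rev2 [1,2,3])  -- [3,2,1]
```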

(3) Fast reverse:

rev3 xs = revs xs [] where
  revs [] ys = ys
  revs (x:xs) ys = revs xs (x:ys)
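
A sketch of how the accumulator builds the result, using the definition above:

```haskell
-- Fast reverse with an accumulating parameter, as above.
rev3 :: [a] -> [a]
rev3 xs = revs xs []
  where
    revs []     ys = ys
    revs (x:xs) ys = revs xs (x:ys)

-- The accumulator ys collects elements as they are peeled off:
--   rev3 [1,2,3] = revs [1,2,3] []
--                = revs [2,3] [1]
--                = revs [3] [2,1]
--                = revs [] [3,2,1]
--                = [3,2,1]
-- One cons per element, hence linear time.

main :: IO ()
main = print (rev3 [1,2,3])  -- [3,2,1]
```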

This is a linear time reverse.

(4) We may do a fast reverse using a fold(right):

rev4 xs = snd (foldr shunt (xs,[]) xs) where
  shunt _ (x:xs,ys) = (xs,x:ys)
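
Here the foldr merely supplies n applications of shunt; the elements actually being moved come from the pair, not from the fold. A sketch using the definition above:

```haskell
-- Fast reverse via foldr over a pair, as above.
rev4 :: [a] -> [a]
rev4 xs = snd (foldr shunt (xs,[]) xs)
  where shunt _ (x:xs,ys) = (xs, x:ys)

-- shunt ignores the element foldr hands it and instead moves the
-- head of the first component onto the second:
--   foldr shunt ([1,2,3],[]) [1,2,3]
--     = shunt 1 (shunt 2 (shunt 3 ([1,2,3],[])))
--     = shunt 1 (shunt 2 ([2,3],[1]))
--     = shunt 1 ([3],[2,1])
--     = ([],[3,2,1])
-- so snd gives [3,2,1].  The list is effectively walked twice: once
-- by the foldr and once through the pair.

main :: IO ()
main = print (rev4 [1,2,3])  -- [3,2,1]
```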

This is order n, but it traverses the list twice ...

(5) We may reverse doing a fold left:

rev5 = foldl (\ys x -> x:ys) []
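
This is the accumulator version of the fast reverse phrased directly as a foldl; a sketch of the evaluation:

```haskell
-- Reverse as a left fold, as above.
rev5 :: [a] -> [a]
rev5 = foldl (\ys x -> x:ys) []

-- foldl threads the accumulator left to right, exactly like revs in
-- the fast reverse:
--   rev5 [1,2,3] = foldl f [] [1,2,3]     where f ys x = x:ys
--                = foldl f [1] [2,3]
--                = foldl f [2,1] [3]
--                = foldl f [3,2,1] []
--                = [3,2,1]

main :: IO ()
main = print (rev5 [1,2,3])  -- [3,2,1]
```

The Prelude's own reverse is defined in essentially this way: the Haskell Report gives reverse = foldl (flip (:)) [].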

This is order n again ...

(6) We may do a higher-order reverse using a fold (right):

rev6 xs = (foldr (\a f -> f . (\ys -> a:ys)) (\x -> x) xs) []

This probably needs some explaining! There are various ways to view
what is going on here. First, notice that the fold builds a function
f: [a] -> [a], which is then applied to the empty list to obtain a
list. List concatenation is being replaced by function composition
(represented by a full stop, where f . g = \x -> f (g x), which is
constant time), so we get a linear time algorithm out of what is
essentially a naive reverse! There is generally an overhead in
dealing with closures, so in practice this is likely to be a small
constant factor slower than a fast reverse.

Notice that to replace concatenation of lists by function composition,
one replaces a list by a function: thus we want [a1,a2,a3] to become
(\t -> a1:(a2:(a3:t))), in which one abstracts over the tail. This is
very similar to holding a pointer to the (pointer to the) tail of the
list.
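
This "list as a function that abstracts the tail" idea is the difference list representation; a minimal sketch (the names DList, toDL, and fromDL are illustrative, not from the text):

```haskell
-- A difference list represents [a1,a2,a3] as \t -> a1:(a2:(a3:t)).
type DList a = [a] -> [a]

-- Inject an ordinary list: prepend it onto whatever tail arrives.
toDL :: [a] -> DList a
toDL xs = \t -> xs ++ t

-- Recover an ordinary list by supplying the empty tail.
fromDL :: DList a -> [a]
fromDL f = f []

-- Concatenation of lists becomes composition of functions, which is
-- constant time:  toDL xs . toDL ys  represents  xs ++ ys.

main :: IO ()
main = print (fromDL (toDL [1,2] . toDL [3,4]))  -- [1,2,3,4]
```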