Research Article

Further characterization based on conditional expectations: new and extended findings

Article: 2201966 | Received 07 Mar 2022, Accepted 05 Apr 2023, Published online: 17 Apr 2023

ABSTRACT

On the basis of the conditional expectation of a function of a random variable, we present a few characterization results in this study. For a given function \(g\) of the form \(g(x)=E(D(X)\mid X\ge x)\), we present necessary and sufficient conditions for characterization results in terms of a single function \(D(x)\). Some of these findings are completely novel, while others extend previously published characterizations.

1. Introduction

Many scholars have explored characterizations of probability distributions. According to [Citation1], a characterization theorem arises in probability and statistics when a certain distribution is the only one that satisfies a specified property. Furthermore, according to [Citation2], a characterization is a specific statistical or distributional property of a statistic or statistics that uniquely determines the associated stochastic model. Some authors have suggested that, before a probability distribution is applied to real-world data, it should first be characterized under certain conditions. Several kinds of characterizations have been investigated, for instance characterizations based on hazard functions, conditional expectations, truncated moments, order statistics and record values. Several authors, including [Citation3–8], have explored characterizations based on conditional expectations. For surveys or further information on these characterization topics, see, for example, [Citation1, Citation2, Citation9–19] and the references therein, among many others.

Motivated by the importance of probability distribution characterizations, we present some new and extended characterizations based on conditional expectations. The characterization results are expressed in terms of a single function of \(X\), say \(H(x)\).

2. Main results

In this paper, we present characterizations based on conditional expectations of the form \(E(D(X)\mid X\ge x)\), where \(D\) is an expression in the function \(H\):
\[
E\big((H(X))^{a}\mid X\ge x\big)=b^{a}\,{}_2F_1\!\Big(a,\,a-r;\,a-r+1;\,1-\tfrac{b}{H(x)}\Big), \tag{1}
\]
\[
E\big((H(X))^{a}\mid X\ge x\big)=r+s\,(H(x))^{a}, \tag{2}
\]
\[
E\big((H(X))^{a}\mid X\ge x\big)=r\,(H(x))^{a}+s\,(H(x))^{a-1}, \tag{3}
\]
\[
E\big(H(X)\mid X\ge x\big)=r\,(H(x))^{a}+H(x), \tag{4}
\]
\[
E\big((H(X))^{a}\,e^{c\,(H(X))^{b}}\mid X\ge x\big)=r+s\,(H(x))^{a}\,e^{c\,(H(x))^{b}}, \tag{5}
\]
\[
E\big(\big[(H(X))^{b}\ln (H(X))^{c}\big]^{a}\mid X\ge x\big)=r+s\,\big[(H(x))^{b}\ln (H(x))^{c}\big]^{a}, \tag{6}
\]
where \(H\) is a continuous function that is differentiable on the interval \((L,U)\).
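Throughout, the conditioning event is the right tail \(\{X\ge x\}\). As a rough illustration of how a conditional expectation of this type can be checked empirically (a sketch that is not part of the characterization results; the exponential sample and the choice \(D(t)=t\) are assumptions made only for the example), one may estimate \(E(D(X)\mid X\ge x)\) by averaging \(D\) over the simulated observations that exceed \(x\):

```python
import numpy as np

def conditional_expectation(sample, D, x):
    """Monte Carlo estimate of E(D(X) | X >= x) from an i.i.d. sample."""
    tail = sample[sample >= x]          # observations in the conditioning event {X >= x}
    if tail.size == 0:
        raise ValueError("no observations with X >= x; increase the sample size")
    return D(tail).mean()

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=10**6)   # illustrative assumption: X ~ Exp(1)
x = 1.5
# For Exp(1) and D(t) = t, memorylessness gives E(X | X >= x) = x + 1, so the
# estimate should be close to 2.5.
print(conditional_expectation(sample, lambda t: t, x), x + 1.0)
```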

We extend certain results established in [Citation20–22], as presented in Propositions 2.1–2.4, respectively, by using Equations (1)–(4). On the other hand, as far as we know, Equations (5) and (6) introduce new characterizations based on conditional expectations, as shown in the corresponding Propositions 2.5 and 2.6, respectively.

Proposition 2.1

Let \(X:\Omega\to(L,U)\) be a continuous random variable with cdf \(F\). Let \(H(x)\) be a function that is differentiable on \((L,U)\) with \(\lim_{x\to L^{+}}H(x)=H(L)\) and \(\lim_{x\to U^{-}}H(x)=b\), where \(a>r\) and \(H(x)>b/2\). Then
\[
E\big((H(X))^{a}\mid X\ge x\big)=b^{a}\,{}_2F_1\!\Big(a,\,a-r;\,a-r+1;\,1-\tfrac{b}{H(x)}\Big)
\]
if and only if
\[
H(x)=\frac{b}{1+\big(\tfrac{b-H(L)}{H(L)}\big)\,[1-F_X(x)]^{\frac{1}{a-r}}}, \tag{7}
\]
where \(x\in(L,U)\) and \({}_2F_1\) is the Gaussian hypergeometric function defined as \({}_2F_1(a,b;c;z)=\sum_{i=0}^{\infty}\frac{(a)_i(b)_i}{(c)_i}\frac{z^{i}}{i!}\); see Equation (15.1.1), page 556, in [Citation23].

Proof.

From Equation (1), we have
\[
\int_x^U (H(t))^{a} f_X(t)\,dt = b^{a}\,(1-F_X(x))\,{}_2F_1\!\Big(a,\,a-r;\,a-r+1;\,1-\tfrac{b}{H(x)}\Big).
\]
Differentiating both sides with respect to \(x\), we obtain
\[
\Big[b^{a}\,{}_2F_1\!\Big(a,\,a-r;\,a-r+1;\,1-\tfrac{b}{H(x)}\Big)-(H(x))^{a}\Big]f_X(x)
=\frac{a\,b^{a+1}(a-r)}{a-r+1}\,\frac{H'(x)}{(H(x))^{2}}\,(1-F_X(x))\,{}_2F_1\!\Big(a+1,\,a-r+1;\,a-r+2;\,1-\tfrac{b}{H(x)}\Big),
\]
where
\[
\frac{d}{dx}\,{}_2F_1(a,b;c;q(x))=\frac{ab}{c}\,q'(x)\,{}_2F_1(a+1,b+1;c+1;q(x)),
\]
see Equation (15.2.1), page 557, in [Citation23].

From this we have
\[
\frac{f_X(x)}{1-F_X(x)}
=\frac{a\,b^{a+1}(a-r)}{a-r+1}\,
\frac{H'(x)\,{}_2F_1\big(a+1,\,a-r+1;\,a-r+2;\,1-\tfrac{b}{H(x)}\big)}
{(H(x))^{2}\Big[b^{a}\,{}_2F_1\big(a,\,a-r;\,a-r+1;\,1-\tfrac{b}{H(x)}\big)-(H(x))^{a}\Big]}.
\]
Integrating both sides from \(L\) to \(x\),
\[
\int_L^x \frac{f_X(t)}{1-F_X(t)}\,dt
=\frac{a\,b^{a+1}(a-r)}{a-r+1}
\int_L^x \frac{H'(t)\,{}_2F_1\big(a+1,\,a-r+1;\,a-r+2;\,1-\tfrac{b}{H(t)}\big)}
{(H(t))^{2}\Big[b^{a}\,{}_2F_1\big(a,\,a-r;\,a-r+1;\,1-\tfrac{b}{H(t)}\big)-(H(t))^{a}\Big]}\,dt.
\]
Let \(u=1-F_X(t)\) and \(v=1/H(t)\); then
\[
\int_{1-F_X(x)}^{1}\frac{du}{u}
=\frac{a\,b^{a+1}(a-r)}{a-r+1}
\int_{1/H(x)}^{1/H(L)}
\frac{{}_2F_1(a+1,\,a-r+1;\,a-r+2;\,1-bv)}
{b^{a}\,{}_2F_1(a,\,a-r;\,a-r+1;\,1-bv)-v^{-a}}\,dv,
\]
so that
\[
\ln(1)-\ln(1-F_X(x))
=b(a-r)\int_{1/H(x)}^{1/H(L)}
\frac{bv\,{}_2F_1(1,\,1-r;\,a-r+1;\,1-bv)-1}
{(bv-1)\big[(bv)^{a}\,{}_2F_1(a,\,a-r;\,a-r+1;\,1-bv)-1\big]}\,dv, \tag{8}
\]
where
\[
{}_2F_1(a+1,\,a-r+1;\,a-r+2;\,1-bv)
=\frac{(a-r+1)\big[bv\,{}_2F_1(1,\,1-r;\,a-r+1;\,1-bv)-1\big]}{a\,(bv-1)(bv)^{a}},
\]
by Equations (1.110), (9.131.1) and (9.137.11) at pages 25, 1008 and 1010, respectively, in [Citation24]. Moreover,
\[
(bv)^{a}\,{}_2F_1(a,\,a-r;\,a-r+1;\,1-bv)-1=bv\,{}_2F_1(1,\,1-r;\,a-r+1;\,1-bv)-1,
\]
see Equation (9.131.1), page 1008, in [Citation24]. Therefore
\[
-\ln(1-F_X(x))=b(a-r)\int_{1/H(x)}^{1/H(L)}\frac{dv}{bv-1}.
\]
Letting \(w=bv-1\) and simplifying, we obtain
\[
F_X(x)=1-\exp\Big\{(a-r)\ln\Big[\frac{b-H(x)}{H(x)}\cdot\frac{H(L)}{b-H(L)}\Big]\Big\}
=1-\Big[\frac{b-H(x)}{H(x)}\cdot\frac{H(L)}{b-H(L)}\Big]^{a-r},
\]
so that
\[
\frac{b-H(L)}{H(L)}\,[1-F_X(x)]^{\frac{1}{a-r}}=\frac{b}{H(x)}-1,
\qquad
H(x)=\frac{b}{1+\big(\tfrac{b-H(L)}{H(L)}\big)[1-F_X(x)]^{\frac{1}{a-r}}}. \tag{9}
\]

Conversely, if Equation (7) holds, then
\[
E\big((H(X))^{a}\mid X\ge x\big)
=\int_x^U\left\{\frac{1+\big(\tfrac{b-H(L)}{H(L)}\big)[1-F_X(t)]^{\frac{1}{a-r}}}{b}\right\}^{-a}\frac{f_X(t)}{1-F_X(x)}\,dt
=\frac{b^{a}}{1-F_X(x)}\sum_{i=0}^{\infty}\binom{-a}{i}\Big(\frac{b-H(L)}{H(L)}\Big)^{i}\int_x^U[1-F_X(t)]^{\frac{i}{a-r}}f_X(t)\,dt.
\]
Let \(u=1-F_X(t)\); then
\[
E\big((H(X))^{a}\mid X\ge x\big)
=\frac{b^{a}}{1-F_X(x)}\sum_{i=0}^{\infty}\frac{a-r}{a-r+i}\binom{-a}{i}\Big(\frac{b-H(L)}{H(L)}\Big)^{i}\,
u^{\frac{a-r+i}{a-r}}\Big|_{0}^{1-F_X(x)}
=b^{a}\sum_{i=0}^{\infty}\frac{a-r}{a-r+i}\binom{-a}{i}\Big(\frac{b-H(L)}{H(L)}\Big)^{i}[1-F_X(x)]^{\frac{i}{a-r}}.
\]
Now,
\[
\binom{m}{n}=\frac{m(m-1)\cdots(m-n+1)}{n!}. \tag{10}
\]
We obtain
\[
E\big((H(X))^{a}\mid X\ge x\big)
=b^{a}\sum_{i=0}^{\infty}\frac{a-r}{a-r+i}\,\frac{(-a)(-a-1)\cdots(-a-i+1)}{i!}\Big(\frac{b-H(L)}{H(L)}\Big)^{i}[1-F_X(x)]^{\frac{i}{a-r}}
=b^{a}\sum_{i=0}^{\infty}\frac{a-r}{a-r+i}\,\frac{(-1)^{i}\,a(a+1)\cdots(a+i-1)}{i!}\Big(\frac{b-H(L)}{H(L)}\Big)^{i}[1-F_X(x)]^{\frac{i}{a-r}}.
\]
Also,
\[
(m)_n=m(m+1)\cdots(m+n-1). \tag{11}
\]
From this,
\[
E\big((H(X))^{a}\mid X\ge x\big)
=b^{a}\sum_{i=0}^{\infty}\frac{a-r}{a-r+i}\,\frac{(-1)^{i}(a)_i}{i!}\Big(\frac{b-H(L)}{H(L)}\Big)^{i}[1-F_X(x)]^{\frac{i}{a-r}}
=b^{a}\sum_{i=0}^{\infty}\frac{(-1)^{i}(a)_i}{i!}\,\frac{(a-r)_i}{(a-r)_i}\,\frac{a-r}{a-r+i}\Big(\frac{b-H(L)}{H(L)}\Big)^{i}[1-F_X(x)]^{\frac{i}{a-r}}.
\]
Moreover,
\[
(m+1)_n=\frac{(m+n)(m)_n}{m}. \tag{12}
\]
Hence
\[
E\big((H(X))^{a}\mid X\ge x\big)
=b^{a}\sum_{i=0}^{\infty}\frac{(a)_i(a-r)_i}{(a-r+1)_i}\,\frac{1}{i!}\Big[-\Big(\frac{b-H(L)}{H(L)}\Big)[1-F_X(x)]^{\frac{1}{a-r}}\Big]^{i}
=b^{a}\,{}_2F_1\!\Big(a,\,a-r;\,a-r+1;\,-\Big(\frac{b-H(L)}{H(L)}\Big)[1-F_X(x)]^{\frac{1}{a-r}}\Big),
\]
where the Gaussian hypergeometric function is defined as \({}_2F_1(a,b;c;z)=\sum_{i=0}^{\infty}\frac{(a)_i(b)_i}{(c)_i}\frac{z^{i}}{i!}\). Using Equation (9), this is
\[
E\big((H(X))^{a}\mid X\ge x\big)=b^{a}\,{}_2F_1\!\Big(a,\,a-r;\,a-r+1;\,1-\frac{b}{H(x)}\Big).
\]

Remark 2.1

  1. If \(r=1\), then using Equations (10) and (11) in Equation (1), after simplification and using Equation (1.110), page 25, in [Citation24], we obtain \(b^{a}\,{}_2F_1\big(a,\,a-1;\,a;\,1-\tfrac{b}{H(x)}\big)=b\,(H(x))^{a-1}\); a numerical check of this reduction is sketched after this remark.

  2. Taking, for example, \(r=1\), \(a=b=\delta>r\) and \(\lim_{x\to L^{+}}H(x)=H(L)=\delta/2>1/2\), we obtain Equation (44) of Proposition 10 in [Citation22].
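As a quick numerical check of the reduction in item 1 (a sketch with arbitrary, assumed parameter values; scipy.special.hyp2f1 evaluates the Gaussian hypergeometric function):

```python
from scipy.special import hyp2f1

# Remark 2.1(1): with r = 1, b**a * 2F1(a, a-1; a; 1 - b/H) should equal b * H**(a-1).
a, b = 3.0, 2.0
H = 1.5                      # any H > b/2, so the argument 1 - b/H stays in (-1, 1)
lhs = b**a * hyp2f1(a, a - 1.0, a, 1.0 - b / H)
rhs = b * H**(a - 1.0)
print(lhs, rhs)              # both equal 4.5
```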

Proposition 2.2

Let \(X:\Omega\to(L,U)\) be a continuous random variable with cdf \(F\). Let \(H(x)\) be a function that is differentiable on \((L,U)\) with \(\lim_{x\to L^{+}}H(x)=H(L)\) and \(\lim_{x\to U^{-}}H(x)=\{r/(1-s)\}^{1/a}\), where \(0<s<1\) and \(r>(1-s)(H(x))^{a}\). Then
\[
E\big((H(X))^{a}\mid X\ge x\big)=r+s\,(H(x))^{a}
\]
if and only if
\[
H(x)=\left\{\frac{\big[r+(s-1)(H(L))^{a}\big][1-F_X(x)]^{\frac{1-s}{s}}-r}{s-1}\right\}^{\frac{1}{a}}, \tag{13}
\]
where \(x\in(L,U)\).

Proof.

If Equation (2) holds, we have
\[
\int_x^U (H(t))^{a} \frac{f_X(t)}{1-F_X(x)}\,dt=r+s\,(H(x))^{a},
\qquad
\int_x^U (H(t))^{a} f_X(t)\,dt=(1-F_X(x))\big[r+s\,(H(x))^{a}\big].
\]
Differentiating both sides with respect to \(x\), we obtain
\[
\big[r+(s-1)(H(x))^{a}\big]f_X(x)=a\,s\,(1-F_X(x))\,(H(x))^{a-1}H'(x),
\]
from which
\[
\frac{f_X(x)}{1-F_X(x)}=\frac{a\,s\,(H(x))^{a-1}H'(x)}{r+(s-1)(H(x))^{a}}.
\]
Integrating both sides from \(L\) to \(x\),
\[
\int_L^x\frac{f_X(t)}{1-F_X(t)}\,dt=s\int_L^x\frac{a\,(H(t))^{a-1}H'(t)}{r+(s-1)(H(t))^{a}}\,dt.
\]
Let \(u=1-F_X(t)\) and \(v=r+(s-1)(H(t))^{a}\); then
\[
-\ln(1-F_X(x))=\frac{s}{s-1}\ln\!\left[\frac{r+(s-1)(H(x))^{a}}{r+(s-1)(H(L))^{a}}\right],
\]
so that
\[
F_X(x)=1-\left[\frac{r+(s-1)(H(x))^{a}}{r+(s-1)(H(L))^{a}}\right]^{\frac{s}{1-s}},
\qquad
r+(s-1)(H(x))^{a}=\big[r+(s-1)(H(L))^{a}\big][1-F_X(x)]^{\frac{1-s}{s}},
\]
and hence
\[
H(x)=\left\{\frac{\big[r+(s-1)(H(L))^{a}\big][1-F_X(x)]^{\frac{1-s}{s}}-r}{s-1}\right\}^{\frac{1}{a}}. \tag{14}
\]
Conversely, if Equation (13) holds, then
\[
E\big((H(X))^{a}\mid X\ge x\big)
=\frac{1}{(s-1)[1-F_X(x)]}\int_x^U\Big\{\big[r+(s-1)(H(L))^{a}\big][1-F_X(t)]^{\frac{1-s}{s}}-r\Big\}f_X(t)\,dt.
\]
Let \(u=1-F_X(t)\); then
\[
E\big((H(X))^{a}\mid X\ge x\big)
=\frac{1}{(s-1)[1-F_X(x)]}\int_0^{1-F_X(x)}\Big\{\big[r+(s-1)(H(L))^{a}\big]u^{\frac{1-s}{s}}-r\Big\}\,du
=\frac{1}{(s-1)[1-F_X(x)]}\Big\{s\big[r+(s-1)(H(L))^{a}\big]u^{\frac{1}{s}}\Big|_0^{1-F_X(x)}-r\,u\Big|_0^{1-F_X(x)}\Big\}
\]
\[
=\frac{1}{s-1}\Big\{s\big[r+(s-1)(H(L))^{a}\big][1-F_X(x)]^{\frac{1-s}{s}}-r\Big\}.
\]
Using Equation (14), we get
\[
E\big((H(X))^{a}\mid X\ge x\big)=\frac{1}{s-1}\big\{(s-1)r+s(s-1)(H(x))^{a}\big\}=r+s\,(H(x))^{a}.
\]
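As a sanity check of Proposition 2.2 (a sketch under assumed parameter values, not taken from the paper): with \(X\sim\mathrm{Uniform}(0,1)\), \(a=2\), \(r=1\), \(s=1/2\) and \(H(L)=1\), Equation (13) reduces to \(H(x)=\sqrt{1+x}\), and both sides of Equation (2) equal \(1+\tfrac{1}{2}(1+x)\); the quadrature below confirms this.

```python
from scipy.integrate import quad

# Illustrative assumptions: X ~ Uniform(0, 1), so F(t) = t and f(t) = 1.
a, r, s, HL = 2.0, 1.0, 0.5, 1.0

def H(t):
    # Equation (13); with these parameters it reduces to H(t) = sqrt(1 + t).
    v_L = r + (s - 1.0) * HL**a
    return ((v_L * (1.0 - t)**((1.0 - s) / s) - r) / (s - 1.0))**(1.0 / a)

x = 0.3
lhs, _ = quad(lambda t: H(t)**a, x, 1.0)        # integral of H(t)^a f(t) dt over (x, 1)
lhs /= (1.0 - x)                                # divide by P(X >= x) = 1 - F(x)
rhs = r + s * H(x)**a
print(lhs, rhs)                                 # both equal 1 + 0.5*(1 + x) = 1.65
```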

Remark 2.2

The following special cases recover, for example, Proposition 2.5, pages 22–23, in [Citation21]:

  1. \(H(L)=1\), \(a=1\), \(r=0\) and \(s=\delta\);

  2. \(H(L)=1\), \(a=\delta\), \(r=0\) and \(s=\epsilon\);

  3. \(H(L)=0\), \(a=1\), \(r=\epsilon\) and \(s=1-\epsilon\).

Remark 2.3

Taking, for example, \(a=1\), \(r=c\), \(s=1-c\) and \(\lim_{x\to L^{+}}H(x)=H(L)=1-c\), we obtain Equation (1) of Proposition 2.1 in [Citation20].

Proposition 2.3

Let \(X:\Omega\to(L,U)\) be a continuous random variable with cdf \(F\). Let \(H(x)\) be a function that is differentiable on \((L,U)\) with \(\lim_{x\to L^{+}}H(x)=H(L)>0\). Then, for \(r\ge 1\), \(s>(1-r)H(L)\) and \(a>1\),
\[
E\big((H(X))^{a}\mid X\ge x\big)=r\,(H(x))^{a}+s\,(H(x))^{a-1}
\]
implies
\[
F_X(x)=\begin{cases}
1-\Big(\dfrac{H(L)}{H(x)}\Big)^{a-1}\Big(\dfrac{(r-1)H(L)+s}{(r-1)H(x)+s}\Big)^{\frac{a+r-1}{r-1}}, & r>1,\\[2ex]
1-\Big(\dfrac{H(L)}{H(x)}\Big)^{a-1}e^{\frac{a}{s}(H(L)-H(x))}, & r=1,
\end{cases} \tag{15}
\]
where \(x\in(L,U)\).

Proof.

For \(r>1\) and from Equation (3), we have
\[
\int_x^U (H(t))^{a}f_X(t)\,dt=[1-F_X(x)]\big[r\,(H(x))^{a}+s\,(H(x))^{a-1}\big].
\]
Differentiating both sides with respect to \(x\), we obtain
\[
f_X(x)\,(H(x))^{a-1}\big[(r-1)H(x)+s\big]=H'(x)\,[1-F_X(x)]\,(H(x))^{a-2}\big[a\,r\,H(x)+(a-1)s\big],
\]
from which, using the partial fraction decomposition of the right-hand side,
\[
\frac{f_X(x)}{1-F_X(x)}=\frac{H'(x)\big[a\,r\,H(x)+(a-1)s\big]}{H(x)\big[(r-1)H(x)+s\big]}
=H'(x)\left\{\frac{a+r-1}{(r-1)H(x)+s}+\frac{a-1}{H(x)}\right\}.
\]
Integrating both sides from \(L\) to \(x\),
\[
\int_L^x\frac{f_X(t)}{1-F_X(t)}\,dt=\int_L^x\left\{\frac{(a+r-1)H'(t)}{(r-1)H(t)+s}+\frac{(a-1)H'(t)}{H(t)}\right\}dt.
\]
Let \(u=1-F_X(t)\), \(v=(r-1)H(t)+s\) and \(w=H(t)\); then, for \(r\neq 1\),
\[
\int_{1-F_X(x)}^{1}\frac{du}{u}
=\frac{a+r-1}{r-1}\int_{(r-1)H(L)+s}^{(r-1)H(x)+s}\frac{dv}{v}+(a-1)\int_{H(L)}^{H(x)}\frac{dw}{w},
\]
so that
\[
\ln(1)-\ln(1-F_X(x))
=\frac{a+r-1}{r-1}\Big[\ln\big((r-1)H(x)+s\big)-\ln\big((r-1)H(L)+s\big)\Big]+(a-1)\Big[\ln H(x)-\ln H(L)\Big],
\]
\[
\ln(1-F_X(x))=\ln\left\{\Big(\frac{H(L)}{H(x)}\Big)^{a-1}\Big(\frac{(r-1)H(L)+s}{(r-1)H(x)+s}\Big)^{\frac{a+r-1}{r-1}}\right\},
\]
\[
F_X(x)=1-\Big(\frac{H(L)}{H(x)}\Big)^{a-1}\Big(\frac{(r-1)H(L)+s}{(r-1)H(x)+s}\Big)^{\frac{a+r-1}{r-1}}. \tag{16}
\]
For \(r=1\),
\[
\lim_{r\to 1}F_X(x)=1-\Big(\frac{H(L)}{H(x)}\Big)^{a-1}\lim_{r\to 1}\left(1+\frac{H(L)-H(x)}{H(x)+\frac{s}{r-1}}\right)^{\frac{a+r-1}{r-1}}.
\]
Let \(n=a/(r-1)\); then
\[
\lim_{r\to 1}F_X(x)=\lim_{n\to\infty}F_X(x)
=1-\Big(\frac{H(L)}{H(x)}\Big)^{a-1}\lim_{n\to\infty}\left(1+\frac{H(L)-H(x)}{H(x)+\frac{ns}{a}}\right)^{1+n}
=1-\Big(\frac{H(L)}{H(x)}\Big)^{a-1}\lim_{n\to\infty}\left(1+\frac{\frac{a}{s}(H(L)-H(x))}{\frac{a}{s}H(x)+n}\right)^{n}.
\]
Let \(m=\frac{a}{s}H(x)+n\); then
\[
\lim_{r\to 1}F_X(x)=\lim_{m\to\infty}F_X(x)
=1-\Big(\frac{H(L)}{H(x)}\Big)^{a-1}\lim_{m\to\infty}\left(1+\frac{\frac{a}{s}(H(L)-H(x))}{m}\right)^{m-\frac{a}{s}H(x)}
=1-\Big(\frac{H(L)}{H(x)}\Big)^{a-1}\lim_{m\to\infty}\left(1+\frac{\frac{a}{s}(H(L)-H(x))}{m}\right)^{m}.
\]
Therefore,
\[
\lim_{r\to 1}F_X(x)=1-\Big(\frac{H(L)}{H(x)}\Big)^{a-1}e^{\frac{a}{s}(H(L)-H(x))}, \tag{17}
\]
where \(\lim_{m\to\infty}\big(1+\tfrac{y}{m}\big)^{m}=e^{y}\).

For \(r>1\), Equation (16) can be rearranged as
\[
\big((r-1)H(x)+s\big)(H(x))^{\frac{(1-a)(1-r)}{a+r-1}}
=\big((r-1)H(L)+s\big)(H(L))^{\frac{(1-a)(1-r)}{a+r-1}}\,[1-F_X(x)]^{\frac{1-r}{a+r-1}},
\]
that is,
\[
\Big(H(x)+\frac{s}{r-1}\Big)(H(x))^{\frac{(1-a)(1-r)}{a+r-1}}
=\Big(H(L)+\frac{s}{r-1}\Big)(H(L))^{\frac{(1-a)(1-r)}{a+r-1}}\,[1-F_X(x)]^{\frac{1-r}{a+r-1}}. \tag{18}
\]
It is difficult to obtain \(H(x)\) analytically in closed form from Equation (18), but it can be found numerically as follows. Let
\[
\delta=\Big(H(L)+\frac{s}{r-1}\Big)(H(L))^{\frac{(1-a)(1-r)}{a+r-1}}\,[1-F_X(x)]^{\frac{1-r}{a+r-1}},\qquad
b=\frac{(1-a)(1-r)}{a+r-1},\qquad
\vartheta=\frac{s}{r-1},\qquad y=H(x).
\]
Then Equation (18) reads \((y+\vartheta)y^{b}=\delta\). Let \(\varphi(y)=(y+\vartheta)y^{b}-\delta=0\), so that \(\varphi'(y)=\big(y+b(\vartheta+y)\big)y^{b-1}\). Applying the Newton-Raphson iteration
\[
y_{n+1}=y_n-\frac{\varphi(y_n)}{\varphi'(y_n)},
\]
we obtain the root \(z\) such that \(\varphi(z)=0\), i.e. \(H(x)=z\).
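A minimal Newton-Raphson sketch for the root of \(\varphi(y)=(y+\vartheta)y^{b}-\delta\) (the numerical values below are assumptions chosen only so that the root is known in advance):

```python
def newton_root(theta, b, delta, y0, tol=1e-12, max_iter=100):
    """Solve (y + theta) * y**b = delta for y by Newton-Raphson."""
    y = y0
    for _ in range(max_iter):
        phi = (y + theta) * y**b - delta
        dphi = (y + b * (theta + y)) * y**(b - 1.0)   # phi'(y)
        step = phi / dphi
        y -= step
        if abs(step) < tol:
            break
    return y

# Illustrative check (assumed values): with theta = 1, b = 2, delta = 12 the root is y = 2,
# since (2 + 1) * 2**2 = 12.
print(newton_root(theta=1.0, b=2.0, delta=12.0, y0=1.5))
```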

For \(r=1\), Equation (17) gives
\[
(H(x))^{1-a}e^{-\frac{a}{s}H(x)}=(H(L))^{1-a}e^{-\frac{a}{s}H(L)}\,[1-F_X(x)],
\]
and raising both sides to the power \(1/(1-a)\),
\[
H(x)\,e^{\frac{a}{s(a-1)}H(x)}=H(L)\,e^{\frac{a}{s(a-1)}H(L)}\,[1-F_X(x)]^{\frac{1}{1-a}}.
\]
Multiplying through by \(\frac{a}{s(a-1)}\),
\[
\frac{a}{s(a-1)}H(x)\,e^{\frac{a}{s(a-1)}H(x)}
=\frac{a}{s(a-1)}H(L)\,e^{\frac{a}{s(a-1)}H(L)}\,[1-F_X(x)]^{\frac{1}{1-a}},
\]
so that
\[
\frac{a}{s(a-1)}H(x)=W\!\Big(\frac{a}{s(a-1)}H(L)\,e^{\frac{a}{s(a-1)}H(L)}\,[1-F_X(x)]^{\frac{1}{1-a}}\Big),
\]
where \(W(\cdot)\) is the Lambert W function. Hence
\[
H(x)=\frac{s(a-1)}{a}\,W\!\Big(\frac{a}{s(a-1)}H(L)\,e^{\frac{a}{s(a-1)}H(L)}\,[1-F_X(x)]^{\frac{1}{1-a}}\Big).
\]
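The Lambert W inversion in the \(r=1\) case can be checked numerically. The sketch below uses assumed values \(a=2\), \(s=1/2\), \(H(L)=1\) and \(H(x)=x\) on \((1,\infty)\) (as in case 2.1 of Section 3 with \(c=1/2\)) and recovers \(H(x_0)=x_0\) from \(F(x_0)\) via scipy.special.lambertw:

```python
import numpy as np
from scipy.special import lambertw

# Assumed illustrative values: a = 2, s = 0.5, H(L) = 1, H(x) = x on (1, inf).
a, s, HL = 2.0, 0.5, 1.0
c0 = a / (s * (a - 1.0))                       # the constant a / (s (a - 1))

x0 = 2.0
# Equation (17) with r = 1: F(x) = 1 - (H(L)/H(x))**(a-1) * exp((a/s) * (H(L) - H(x))).
F = 1.0 - (HL / x0)**(a - 1.0) * np.exp((a / s) * (HL - x0))

# Invert: c0*H(x) = W(c0*H(L)*exp(c0*H(L)) * (1 - F)**(1/(1-a))).
arg = c0 * HL * np.exp(c0 * HL) * (1.0 - F)**(1.0 / (1.0 - a))
H_x = lambertw(arg).real / c0
print(H_x)                                     # recovers x0 = 2.0
```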

Remark 2.4

Taking, for example, (L,U)=(a,b), r = 1, s = c and H(L)=γ>0, we obtain Equation (7) of Proposition 2.3 in [Citation20].

Remark 2.5

Taking, for example, (L,U)=(a,b), r = 1, s = 1−c and H(L)=γ>0, we obtain Equation (11) of Proposition 2.5 in [Citation20].

Proposition 2.4

Let \(X:\Omega\to(L,U)\) be a continuous random variable with cdf \(F\). Let \(H(x)\) be a function that is differentiable on \((L,U)\) with \(\lim_{x\to L^{+}}H(x)=H(L)>0\) and \(\lim_{x\to U^{-}}H(x)=\infty\). Then, for \(r>0\) and \(a>1\),
\[
E\big(H(X)\mid X\ge x\big)=r\,(H(x))^{a}+H(x)
\]
implies
\[
F_X(x)=1-\Big(\frac{H(L)}{H(x)}\Big)^{a}\exp\!\left[\frac{(H(L))^{1-a}-(H(x))^{1-a}}{r\,(1-a)}\right], \tag{19}
\]
where \(x\in(L,U)\).

Proof.

From Equation (4), we have
\[
\int_x^U H(t)\,f_X(t)\,dt=[1-F_X(x)]\big[r\,(H(x))^{a}+H(x)\big].
\]
Differentiating both sides with respect to \(x\), we obtain
\[
r\,f_X(x)\,(H(x))^{a}=[1-F_X(x)]\big[a\,r\,(H(x))^{a-1}H'(x)+H'(x)\big],
\]
from which
\[
\frac{f_X(x)}{1-F_X(x)}=\frac{a\,r\,(H(x))^{a-1}H'(x)+H'(x)}{r\,(H(x))^{a}}.
\]
Integrating both sides from \(L\) to \(x\) with \(u=1-F_X(t)\) and \(v=H(t)\), we get
\[
\ln(1)-\ln(1-F_X(x))=a\big[\ln H(x)-\ln H(L)\big]+\frac{1}{r(1-a)}\big[(H(x))^{1-a}-(H(L))^{1-a}\big],
\]
\[
\ln(1-F_X(x))=\ln\left\{\Big(\frac{H(L)}{H(x)}\Big)^{a}\exp\!\Big[\frac{(H(L))^{1-a}-(H(x))^{1-a}}{r(1-a)}\Big]\right\},
\]
so that
\[
F_X(x)=1-\Big(\frac{H(L)}{H(x)}\Big)^{a}\exp\!\left[\frac{(H(L))^{1-a}-(H(x))^{1-a}}{r(1-a)}\right],
\]
which is Equation (19). To recover \(H(x)\), write
\[
(H(x))^{a}\exp\!\Big(\frac{(H(x))^{1-a}}{r(1-a)}\Big)
=(H(L))^{a}\exp\!\Big(\frac{(H(L))^{1-a}}{r(1-a)}\Big)\,[1-F_X(x)]^{-1},
\]
and raise both sides to the power \((1-a)/a\):
\[
(H(x))^{1-a}\exp\!\Big(\frac{(H(x))^{1-a}}{ra}\Big)
=(H(L))^{1-a}\exp\!\Big(\frac{(H(L))^{1-a}}{ra}\Big)\,[1-F_X(x)]^{\frac{a-1}{a}}.
\]
Multiplying through by \(\frac{1}{ra}\),
\[
\frac{(H(x))^{1-a}}{ra}\exp\!\Big(\frac{(H(x))^{1-a}}{ra}\Big)
=\frac{(H(L))^{1-a}}{ra}\exp\!\Big(\frac{(H(L))^{1-a}}{ra}\Big)\,[1-F_X(x)]^{\frac{a-1}{a}},
\]
so that
\[
\frac{(H(x))^{1-a}}{ra}=W\!\Big(\frac{(H(L))^{1-a}}{ra}\exp\!\Big(\frac{(H(L))^{1-a}}{ra}\Big)\,[1-F_X(x)]^{\frac{a-1}{a}}\Big),
\]
where \(W(\cdot)\) is the Lambert W function. Therefore \((H(x))^{1-a}=ra\,W(\cdot)\), and hence
\[
H(x)=\left\{ra\,W\!\Big(\frac{(H(L))^{1-a}}{ra}\exp\!\Big(\frac{(H(L))^{1-a}}{ra}\Big)\,[1-F_X(x)]^{\frac{a-1}{a}}\Big)\right\}^{\frac{1}{1-a}}.
\]

Remark 2.6

Taking, for example, a = 2, r = 1 and H(L)=γ>0, we obtain Equation (15) of Proposition 2.7 in [Citation20].

Proposition 2.5

Let \(X:\Omega\to(L,U)\) be a continuous random variable with cdf \(F\). Let \(H(x)\) be a function that is differentiable on \((L,U)\) with \(\lim_{x\to L^{+}}H(x)=H(L)\) and \(\lim_{x\to U^{-}}H(x)=\Big\{\frac{a}{bc}\,W\!\big(\frac{bc}{a}\big[\frac{r}{1-s}\big]^{\frac{b}{a}}\big)\Big\}^{\frac{1}{b}}\). Then, for \(0<s<1\), \(0\le r<1-s\) and \(a,b,c>0\),
\[
E\big((H(X))^{a}\,e^{c\,(H(X))^{b}}\mid X\ge x\big)=r+s\,(H(x))^{a}\,e^{c\,(H(x))^{b}}
\]
implies
\[
H(x)=\left\{\frac{a}{bc}\,W\!\left(\frac{bc}{a}\left[\frac{\big[r+(s-1)(H(L))^{a}e^{c(H(L))^{b}}\big][1-F_X(x)]^{\frac{1-s}{s}}-r}{s-1}\right]^{\frac{b}{a}}\right)\right\}^{\frac{1}{b}}, \tag{20}
\]
where \(x\in(L,U)\) and \(W(\cdot)\) is the Lambert W function.

Proof.

From Equation (5), we have
\[
\int_x^U (H(t))^{a}e^{c(H(t))^{b}}f_X(t)\,dt=[1-F_X(x)]\big[r+s\,(H(x))^{a}e^{c(H(x))^{b}}\big].
\]
Differentiating both sides with respect to \(x\), we obtain
\[
f_X(x)\big[r+(s-1)(H(x))^{a}e^{c(H(x))^{b}}\big]
=s\,[1-F_X(x)]\,H'(x)(H(x))^{a-1}e^{c(H(x))^{b}}\big[a+b\,c\,(H(x))^{b}\big],
\]
from which
\[
\frac{f_X(x)}{1-F_X(x)}=\frac{s\,H'(x)(H(x))^{a-1}e^{c(H(x))^{b}}\big[a+b\,c\,(H(x))^{b}\big]}{r+(s-1)(H(x))^{a}e^{c(H(x))^{b}}}.
\]
Integrating both sides from \(L\) to \(x\) with \(u=1-F_X(t)\) and \(v=r+(s-1)(H(t))^{a}e^{c(H(t))^{b}}\), we get
\[
-\ln(1-F_X(x))=\frac{s}{s-1}\Big\{\ln\big[r+(s-1)(H(x))^{a}e^{c(H(x))^{b}}\big]-\ln\big[r+(s-1)(H(L))^{a}e^{c(H(L))^{b}}\big]\Big\},
\]
\[
F_X(x)=1-\left\{\frac{r+(s-1)(H(x))^{a}e^{c(H(x))^{b}}}{r+(s-1)(H(L))^{a}e^{c(H(L))^{b}}}\right\}^{\frac{s}{1-s}},
\]
so that
\[
r+(s-1)(H(x))^{a}e^{c(H(x))^{b}}=\big[r+(s-1)(H(L))^{a}e^{c(H(L))^{b}}\big][1-F_X(x)]^{\frac{1-s}{s}},
\qquad
(H(x))^{a}e^{c(H(x))^{b}}=\frac{\big[r+(s-1)(H(L))^{a}e^{c(H(L))^{b}}\big][1-F_X(x)]^{\frac{1-s}{s}}-r}{s-1}.
\]
Let
\[
\delta=\frac{\big[r+(s-1)(H(L))^{a}e^{c(H(L))^{b}}\big][1-F_X(x)]^{\frac{1-s}{s}}-r}{s-1}.
\]
Then \((H(x))^{a}e^{c(H(x))^{b}}=\delta\) gives \(\frac{bc}{a}(H(x))^{b}=W\big(\frac{bc}{a}\,\delta^{\frac{b}{a}}\big)\), where \(W(\cdot)\) is the Lambert W function. Hence
\[
H(x)=\left\{\frac{a}{bc}\,W\!\Big(\frac{bc}{a}\,\delta^{\frac{b}{a}}\Big)\right\}^{\frac{1}{b}},
\]
with \(\delta\) as above, which is Equation (20).

Proposition 2.6

Let \(X:\Omega\to(L,U)\) be a continuous random variable with cdf \(F\). Let \(H(x)\) be a function that is differentiable on \((L,U)\) with \(\lim_{x\to L^{+}}H(x)=H(L)>0\) and \(\lim_{x\to U^{-}}H(x)=\exp\Big\{\frac{1}{b}\,W\!\big(\frac{b}{c}\big[\frac{r}{1-s}\big]^{\frac{1}{a}}\big)\Big\}\). Then, for \(0<s<1\), \(0\le r<1-s\) and \(a,b,c>0\),
\[
E\big(\big[(H(X))^{b}\ln (H(X))^{c}\big]^{a}\mid X\ge x\big)=r+s\,\big[(H(x))^{b}\ln (H(x))^{c}\big]^{a}
\]
implies
\[
H(x)=\exp\left\{\frac{1}{b}\,W\!\left(\frac{b}{c}\left[\frac{\big[r+(s-1)\big[(H(L))^{b}\ln (H(L))^{c}\big]^{a}\big][1-F_X(x)]^{\frac{1-s}{s}}-r}{s-1}\right]^{\frac{1}{a}}\right)\right\}, \tag{21}
\]
where \(x\in(L,U)\) and \(W(\cdot)\) is the Lambert W function.

Proof.

From Equation (6), we have
\[
\int_x^U\big[(H(t))^{b}\ln (H(t))^{c}\big]^{a}f_X(t)\,dt=[1-F_X(x)]\big[r+s\,\big[(H(x))^{b}\ln (H(x))^{c}\big]^{a}\big].
\]
Differentiating both sides with respect to \(x\), we obtain
\[
f_X(x)\big[r+(s-1)\big[(H(x))^{b}\ln (H(x))^{c}\big]^{a}\big]
=a\,c\,s\,[1-F_X(x)]\,H'(x)(H(x))^{b-1}\big[(H(x))^{b}\ln (H(x))^{c}\big]^{a-1}\big[1+b\ln (H(x))\big],
\]
from which
\[
\frac{f_X(x)}{1-F_X(x)}=\frac{a\,c\,s\,H'(x)(H(x))^{b-1}\big[(H(x))^{b}\ln (H(x))^{c}\big]^{a-1}\big[1+b\ln (H(x))\big]}{r+(s-1)\big[(H(x))^{b}\ln (H(x))^{c}\big]^{a}}.
\]
Integrating both sides from \(L\) to \(x\) with \(u=1-F_X(t)\) and \(v=r+(s-1)\big[(H(t))^{b}\ln (H(t))^{c}\big]^{a}\), we get
\[
-\ln(1-F_X(x))=\frac{s}{s-1}\Big\{\ln\big[r+(s-1)\big[(H(x))^{b}\ln (H(x))^{c}\big]^{a}\big]-\ln\big[r+(s-1)\big[(H(L))^{b}\ln (H(L))^{c}\big]^{a}\big]\Big\},
\]
\[
F_X(x)=1-\left\{\frac{r+(s-1)\big[(H(x))^{b}\ln (H(x))^{c}\big]^{a}}{r+(s-1)\big[(H(L))^{b}\ln (H(L))^{c}\big]^{a}}\right\}^{\frac{s}{1-s}},
\]
so that
\[
\big[(H(x))^{b}\ln (H(x))^{c}\big]^{a}=\frac{\big[r+(s-1)\big[(H(L))^{b}\ln (H(L))^{c}\big]^{a}\big][1-F_X(x)]^{\frac{1-s}{s}}-r}{s-1}.
\]
Let
\[
\delta=\frac{\big[r+(s-1)\big[(H(L))^{b}\ln (H(L))^{c}\big]^{a}\big][1-F_X(x)]^{\frac{1-s}{s}}-r}{s-1}.
\]
Then \(\big[(H(x))^{b}\ln (H(x))^{c}\big]^{a}=\delta\) gives \(b\ln (H(x))=W\big(\frac{b}{c}\,\delta^{\frac{1}{a}}\big)\), where \(W(\cdot)\) is the Lambert W function. Hence
\[
H(x)=\exp\left\{\frac{1}{b}\,W\!\Big(\frac{b}{c}\,\delta^{\frac{1}{a}}\Big)\right\},
\]
with \(\delta\) as above, which is Equation (21).

3. Applications

As illustrations and without loss of generality, we apply three of these characterizations as follows:

  1. Using Proposition 2.1 with \(r=1\), \(a=b=\delta>1\), \(\lim_{x\to L^{+}}H(x)=H(L)=\delta/2>1/2\) and \((L,U)=(0,\infty)\), we have:

    1. If \(F(x)\) is the cdf of the TL-G distribution (see [Citation25]), given by \(F(x)=\big(G(x)(2-G(x))\big)^{\alpha}=\big(1-(1-G(x))^{2}\big)^{\alpha}\), then Proposition 2.1 (using Equation (7)) gives a characterization of the TL-G distribution as follows:
\[
H(x)=\delta\Big[1+\big(1-\big(1-(1-G(x))^{2}\big)^{\alpha}\big)^{\frac{1}{\delta-1}}\Big]^{-1}.
\]

    2. If \(F(x)\) is the cdf of the AGT-G distribution (see [Citation22]), given by \(F(x)=(1+\lambda)\big[1-(1-G(x))^{\alpha}\big]-\lambda\big[1-(1-G(x))^{\alpha}\big]^{2}=1-(1-G(x))^{\alpha}\big(1-\lambda+\lambda(1-G(x))^{\alpha}\big)\), then Proposition 2.1 (using Equation (7)) gives a characterization of the AGT-G distribution as follows:
\[
H(x)=\delta\Big[1+\Big((1-G(x))^{\alpha}\big(1-\lambda+\lambda(1-G(x))^{\alpha}\big)\Big)^{\frac{1}{\delta-1}}\Big]^{-1}.
\]

  2. Using Proposition 2.3 with \(r=1\), \(a=\delta\) and \(\lim_{x\to L^{+}}H(x)=H(L)=1\), we get:

    1. If \(s=1-c\) and \(H(x)=x\), where \((L,U)=(1,\infty)\) and \(\lim_{x\to U^{-}}H(x)=\infty\), then \(F(x)=1-x^{1-\delta}\,e^{\frac{\delta}{1-c}(1-x)}\);

    2. If \(s=c\) and \(H(x)=1/x\), where \((L,U)=(1,\infty)\) and \(\lim_{x\to U^{-}}H(x)=0\), then \(F(x)=1-x^{\delta-1}\,e^{\frac{\delta}{c}(1-\frac{1}{x})}\);

    3. If \(s=c\) and \(H(x)=e^{-x}\), where \((L,U)=(0,\infty)\) and \(\lim_{x\to U^{-}}H(x)=0\), then \(F(x)=1-\exp\big\{(\delta-1)x+\tfrac{\delta}{c}(1-e^{-x})\big\}\);

    4. If \(s=1-c\) and \(H(x)=e^{x}\), where \((L,U)=(0,\infty)\) and \(\lim_{x\to U^{-}}H(x)=\infty\), then \(F(x)=1-\exp\big\{(1-\delta)x+\tfrac{\delta}{1-c}(1-e^{x})\big\}\).

  3. Using Proposition 2.4 with \(r=1\), \(a=2\) and \(\lim_{x\to L^{+}}H(x)=H(L)=1\), we get:

    1. If \(H(x)=x\), where \((L,U)=(1,\infty)\) and \(\lim_{x\to U^{-}}H(x)=\infty\), then \(F(x)=1-x^{-2}e^{(\frac{1}{x}-1)}\) (a numerical check of this case is given at the end of this section);

    2. If \(H(x)=e^{x}\), where \((L,U)=(0,\infty)\) and \(\lim_{x\to U^{-}}H(x)=\infty\), then \(F(x)=1-e^{(e^{-x}-2x-1)}\).

The other propositions may be treated in a similar fashion.
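As a numerical illustration of case 3.1 above (a sketch; the tail integral is evaluated by quadrature), the cdf \(F(x)=1-x^{-2}e^{1/x-1}\) on \((1,\infty)\) should satisfy \(E(X\mid X\ge x)=x^{2}+x\), in line with Proposition 2.4 with \(a=2\), \(r=1\) and \(H(x)=x\):

```python
import numpy as np
from scipy.integrate import quad

def S(x):
    # Survival function of the cdf in case 3.1: F(x) = 1 - x**(-2) * exp(1/x - 1), x > 1.
    return x**(-2.0) * np.exp(1.0 / x - 1.0)

def f(x):
    # Density obtained by differentiating F: f(x) = exp(1/x - 1) * (2*x**(-3) + x**(-4)).
    return np.exp(1.0 / x - 1.0) * (2.0 * x**(-3.0) + x**(-4.0))

x = 1.7
num, _ = quad(lambda t: t * f(t), x, np.inf)    # integral of t f(t) dt over (x, inf)
lhs = num / S(x)                                # E(X | X >= x)
rhs = x**2 + x                                  # r*(H(x))**a + H(x) with a = 2, r = 1, H(x) = x
print(lhs, rhs)                                 # both approximately 4.59
```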

4. Conclusions

The characterization of a distribution is important in many fields and has recently attracted the interest of many researchers. As a result, several characterization results have been published in the literature. The purpose of this study is to present several characterizations of distributions in their general form, in the hope that they will be useful to researchers wishing to know whether their model satisfies the requirements of a certain underlying distribution.

Disclosure statement

No potential conflict of interest was reported by the author.

References

  • Koudou AE, Ley C. Characterizations of GIG laws: A survey. Probab Surv. 2014;11:161–176. DOI:10.1214/13-PS227
  • Nagaraja H. Characterizations of probability distributions. In: Springer handbook of engineering statistics. London: Springer; 2006. p. 79–95. DOI:10.1007/978-1-84628-288-1_4
  • Dimaki C, Xekalaki E. Towards a unification of certain characterizations by conditional expectations. Ann Inst Stat Math. 1996;48:157–168. DOI:10.1007/BF00049296
  • Ghitany M, Gupta R, Wang S. Some characterization results by conditional expectations and their applications to lindley-type distributions. Int J Probab Stat. 2017;7:86. DOI:10.5539/ijsp.v7n1p86
  • Khan A, Athar H, Yaqub M. Characterization of probability distributions through conditional expectation of function of two order statistics. Calcutta Stat Assoc Bull. 2001;51:259–266. DOI:10.1177/0008068320010309
  • Ruiz JM, Navarro J. Characterizations based on conditional expectations of the doubled truncated distribution. Ann Inst Stat Math. 1996;48:563–572. DOI:10.1007/BF00050855
  • Su JC, Huang WJ. Characterizations based on conditional expectations. Stat Pap. 2000;41:423. DOI:10.1007/BF02925761
  • Zoroa P, Ruiz J, Marín J. A characterization based on conditional expectations. Commun Stat Theory Methods. 1990;19:3127–3135. DOI:10.1080/03610929008830368
  • Ahsanullah M, Shakil M, Kibria BMG. Characterizations of folded student's t distribution. J Stat Distrib Appl. 2015;2:15. DOI:10.1186/s40488-015-0037-5
  • Ahsanullah M, Shakil M, Kibria BG. On a generalized raised cosine distribution: some properties, characterizations and applications. Moroc J Pure Appl Anal. 2019;5:63–85. DOI:10.2478/mjpaa-2019-0006
  • Ahsanullah M. Characterizations of univariate continuous distributions. Paris: Springer; 2017. DOI:10.2991/978-94-6239-139-0
  • Ahsanullah M, Shakil M. A note on the characterizations of pareto distribution by upper record values. Commun Korean Math Soc. 2012;27:835–842. DOI:10.4134/CKMS.2012.27.4.835
  • Ahsanullah M, Shakil M. Characterizations of continuous probability distributions occurring in physics and allied sciences by truncated moment. Int J Adv Stat Probab. 2015;3:100–114.
  • Ahsanullah M, Shakil M, Kibria BMG. Characterizations of continuous distributions by truncated moment. J Mod Appl Stat Methods. 2016;15:17. DOI:10.22237/jmasm/1462076160
  • Galambos J, Kotz S. Characterizations of probability distributions: a unified approach with an emphasis on exponential and related models. Vol. 675. Berlin: Springer; 2006. DOI:10.1007/BFb0069530
  • Hamedani G. Characterizations of recently introduced univariate continuous distributions II. Nova Science Publishers; 2019. (Mathematics research developments). Available from: https://books.google.com.sa/books?id=rtAywAEACAAJ.
  • Khan MI. Characterization of some continuous distributions by truncated moment. J Stat Manag Syst. 2021;0:1–10. DOI:10.1080/09720510.2021.1933709
  • Kotz S, Shanbhag DN. Some new approaches to probability distributions. Adv Appl Probab. 1980;12:903–921. DOI:10.2307/1426748
  • Shakil M, Ahsanullah M, Kibria BMG. Some characterizations and applications of a size-biased weighted distribution useful in lifetime modelling. J Stat Appl Probab. 2021;10:607–624. DOI:10.18576/jsap/100301
  • Hamedani G. A few characterizations of the univariate continuous distributions. JIRSS. 2016;15:63–71. Available from: https://iranjournals.nlai.ir/bitstream/handle/123456789/600330/C1AEA206AD3A27B2CEAF61EFEA8CB376.pdf?sequence=-1
  • Hamedani G, Mameli V. Characterizations of the generalized beta-generated family of distributions. J Stat Theory Appl. 2017;16:18–25. DOI:10.2991/jsta.2017.16.1.2
  • Merovci F, Alizadeh M, Hamedani GG. Another generalized transmuted family of distributions:properties and applications. Austrian J Stat. 2016;45:71–93. Available from: https://www.ajs.or.at/index.php/ajs/article/view/doi
  • Abramowitz M. Handbook of mathematical functions: with formulas, graphs, and mathematical tables. New York: Dover Publications; 1970. Available from: https://books.google.com.sa/books?id=4j2hwwEACAAJ.
  • Gradshteyn IS, Ryzhik IM. Table of integrals, series, and products. New York: Academic Press; 2007.
  • Al-Shomrani A, Arif O, Shawky A, et al. Topp–Leone family of distributions: some properties and application. Pakistan J Stat Oper Res. 2016;12:443–451. DOI:10.18187/pjsor.v12i3.1458