Bachman and Narici, Functional Analysis





By George Bachman and Lawrence Narici. In this chapter, some of the conventions and terminology that will be adhered to throughout this book will be set down.

They are listed only so that there can be no ambiguity when terms such as vector space or linear transformation are used later, and are not intended as a brief course in linear algebra. After establishing our rules, certain basic notions pertaining to inner product spaces will be touched upon.

In particular, certain inequalities and equalities will be stated. Then a rather natural extension of the notion of orthogonality of vectors from the euclidean plane is defined. Just as choosing a mutually orthogonal basis system of vectors was advantageous in two- or three-dimensional euclidean space, there are also certain advantages in dealing with orthogonal systems in higher dimensional spaces.

A result proved in this chapter, known as the Gram-Schmidt process, shows that an orthogonal system of vectors can always be constructed from any denumerable set of linearly independent vectors. Focusing then more on the finite-dimensional situation, the notion of the adjoint of a linear transformation is introduced and certain of its properties are demonstrated.

We shall often describe the above situation by saying that X is a vector space over F and shall rather strictly adhere to using lower-case italic letters (especially x, y, z, …) for vectors and lower-case Greek letters for scalars. The additive identity element of the field will be denoted by 0, and we shall also denote the additive identity element of the vector addition by 0. We feel that no confusion will arise from this practice, however. A third connotation that the symbol 0 will carry is the transformation of one vector space into another that maps every vector of the first space into the zero vector of the second.

The multiplicative identity of the field will be denoted by 1 and so will that transformation of a vector space into itself which maps every vector into itself.

A collection of vectors x1, x2, …, xn is said to be linearly independent if the relation

α1x1 + α2x2 + ⋯ + αnxn = 0

implies α1 = α2 = ⋯ = αn = 0. An arbitrary collection of vectors is said to be linearly independent if every finite subset is linearly independent. A linearly independent set of vectors with the property that any vector can be expressed as a linear combination of some subset is called a basis for the vector space (or sometimes a Hamel basis).
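To illustrate the definition: in the euclidean plane, the vectors (1, 0) and (1, 1) are linearly independent, since α1(1, 0) + α2(1, 1) = (α1 + α2, α2) = (0, 0) forces α2 = 0 and then α1 = 0. On the other hand, the three vectors (1, 0), (1, 1), (2, 1) satisfy the nontrivial relation (1, 0) + (1, 1) − (2, 1) = 0 and so are linearly dependent.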

Note that a linear combination is always a finite sum, which means that, even though there may be an infinite number of vectors in the basis, we shall never express a vector as an infinite sum—the fact of the matter being that infinite sums are meaningless unless the notion of limit of a sequence of vectors has somehow been introduced to the space. A subset of a vector space that is also a vector space, with respect to the same operations, is called a subspace.

In general, if a collection of vectors, linearly independent or not, is such that any vector in the space can be expressed as a linear combination of them, the collection is said to span the space. Assuming that things take place in the vector space X: if M1, M2, …, Mn are subspaces of X such that every vector x in X can be written in exactly one way as

x = m1 + m2 + ⋯ + mn (mi in Mi),

then X is said to be the direct sum of M1, M2, …, Mn, written X = M1 ⊕ M2 ⊕ ⋯ ⊕ Mn. One will sometimes see another adjective involved in the description of the above situation: it is often called an internal direct sum decomposition.

This is because a new vector space V can be formed from two others, X and Y, over the same field; this new one is called the external direct sum of X and Y. What is done in this case is to take for V the set of all ordered pairs (x, y) with x in X and y in Y. We define

(x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and α(x, y) = (αx, αy).
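For a concrete instance of both notions: taking X = Y = R, the external direct sum V consists of all pairs (x, y) of real numbers with componentwise operations, which is essentially the euclidean plane. Viewed internally, that same plane is the direct sum of the two subspaces M1 = {(α, 0)} and M2 = {(0, β)}, every vector (α, β) having the unique representation (α, 0) + (0, β).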

Adopting this convention then, given a direct sum decomposition, we need not bother to enquire: Are the terms in the direct sum distinct vector spaces, or are they subspaces of the same vector space? Given a mapping A which maps a vector space X into a vector space Y over the same field, if

A(αx + βy) = αA(x) + βA(y)

for all vectors x, y and all scalars α, β, then A is called a linear transformation. Often when linear transformations are involved, we shall omit the parentheses and write Ax instead of A(x).
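For example, on the vector space of polynomials with real coefficients, differentiation D, defined by Dp = p′, is a linear transformation, since (αp + βq)′ = αp′ + βq′.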

If the linear transformation is 1 : 1, it is called an isomorphism of X into Y. If A is 1 : 1 and onto, the two spaces are said to be isomorphic. Two isomorphic spaces are completely indistinguishable as abstract vector spaces and in some cases, isomorphic spaces are actually identified.

An instance where such an identification is convenient was mentioned in the preceding paragraph. Having given meaning to the formation of linear combinations of linear transformations, let us look at the special case of all linear transformations mapping a vector space into itself. If A is such a transformation and there is a nonzero vector x and a scalar λ such that Ax = λx, then x is called an eigenvector of A and λ an eigenvalue of A. Equivalent terminology also used is characteristic vector and proper vector instead of eigenvector, and characteristic value and proper value for eigenvalue.
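For instance, the transformation of the euclidean plane sending (ξ1, ξ2) to (2ξ1, 3ξ2) has the eigenvector (1, 0) with eigenvalue 2, since A(1, 0) = (2, 0) = 2(1, 0), and the eigenvector (0, 1) with eigenvalue 3.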

A linear transformation E of X into itself satisfying E² = E is called a projection. As it happens, a projection always determines a direct sum decomposition of X into its range and null space, X = R(E) ⊕ N(E) (cf. the discussion of direct sums above). One notes in this case that a subspace L is said to be invariant under a transformation A if A maps L into itself. We do not mean by this that every vector in L is left fixed by A; it is just mapped into some vector in L.
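As a small numerical sketch of this decomposition (ours, not the book's; numpy is used for convenience), an idempotent matrix on the plane splits every vector into a range part and a null-space part:

```python
import numpy as np

# P projects R^2 onto the x-axis along the y-axis; P @ P == P (idempotent).
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
assert np.allclose(P @ P, P)

x = np.array([3.0, 4.0])
r = P @ x                      # component in the range of P
n = x - r                      # component in the null space of P
assert np.allclose(P @ n, 0.0) # n is annihilated by P
assert np.allclose(r + n, x)   # x decomposes as r + n
print(r, n)                    # [3. 0.] [0. 4.]
```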

Two other concepts that we shall make extensive use of are greatest lower bounds and least upper bounds. Suppose U is a bounded set of real numbers, that al is a lower bound for U (al ≤ x for every x in U), and that ar is an upper bound for U (x ≤ ar for every x in U). The numbers al and ar have the properties that al can be replaced by anything smaller, whereas ar can be replaced by anything larger, and we shall still obtain lower and upper bounds, respectively, for U.

A fundamental property of the real numbers, however, is that among all the lower bounds, there is a greatest one in the sense that at least one point of U lies to the left of any number greater than the greatest lower bound.
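For example, the set U = {1, 1/2, 1/3, …} has greatest lower bound 0: every element is positive, so 0 is a lower bound, while any number greater than 0 has some 1/n to its left. Its least upper bound is 1, which in this case actually belongs to U.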

Aside from this material pertaining to functions, there are many other notions from set theory that are absolutely indispensable to the study of functional analysis. We feel it is unnecessary to discuss any of the notions from set theory and consider it sufficient to list the pertinent notation we shall use in the List of Symbols. No doubt the reader has come in contact with them (for example, unions, intersections, membership) before. We make one exception to this rule with regard to the concept of an equivalence relation.

A relation ~ between elements of a set that is reflexive (x ~ x for every x), symmetric (x ~ y implies y ~ x), and transitive (x ~ y and y ~ z imply x ~ z) is called an equivalence relation. As a mnemonic device for recalling the three defining properties, one sometimes sees it referred to as an RST relation. Examples of equivalence relations are equality of real numbers, equality of sets (subsets of a given set, say), congruence among the class of all triangles, and congruence of integers modulo a fixed integer.

For a proof and further discussion one might look at Ref. Suppose X is a real or complex vector space; that is, suppose the underlying scalar field is either the real or complex numbers R or C. We now make the following definition: an inner product (x, y) on X is a scalar-valued function of pairs of vectors x and y such that

(I1) (x, y) is the complex conjugate of (y, x);
(I2) (αx + βy, z) = α(x, z) + β(y, z);
(I3) (x, x) ≥ 0, with (x, x) = 0 only if x = 0.

It is noted that, by property (I1), (x, x) must always be real, so the requirement (I3) always makes sense. A real or complex vector space with an inner product defined on it will be called an inner product space; one often sees this abbreviated as i.p. space.

Inner product spaces are also called pre-Hilbert spaces. We note that if (x, y) = 0 for every vector y in the space, then x must be 0. To prove this, noting that this is true for every vector in the space, take y equal to x and apply part (I3) of the definition. In the following examples, the scalar field can be taken to be either the real numbers or the complex numbers. The verification that the examples have the properties claimed is left as an exercise for the reader.

Let Cn denote the vector space of n-tuples of complex numbers. As the inner product of the two vectors x = (ξ1, ξ2, …, ξn) and y = (η1, η2, …, ηn), we take

(x, y) = ξ1η̄1 + ξ2η̄2 + ⋯ + ξnη̄n.

Cn with this inner product is referred to as complex euclidean n-space. With R in place of C, one speaks of real euclidean n-space. Next, let C[a, b] denote the vector space of continuous complex-valued functions on the interval [a, b]. As the inner product of any two vectors f(x) and g(x) in this space, we shall take

(f, g) = ∫_a^b f(x) ḡ(x) dx.

We shall sometimes use the same notation to denote the real space of real-valued continuous functions on [a, b]; we shall always specifically indicate which one is meant, however.
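A quick numerical check of the euclidean inner product (an illustrative sketch of ours; note that numpy's vdot conjugates its first argument, so the arguments are swapped to match the convention above):

```python
import numpy as np

x = np.array([1 + 2j, 3 - 1j])
y = np.array([2 - 1j, 1j])

# (x, y) = sum_i xi * conj(yi); np.vdot(a, b) computes sum_i conj(ai) * bi,
# so np.vdot(y, x) matches the convention used in the text.
ip = np.vdot(y, x)
print(ip)                      # the inner product (x, y)
print(np.vdot(x, x).real)      # (x, x) is real and nonnegative
```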

The following example is presented for those familiar with some measure theory. Since it is not essential to the later development, the reader should not be disturbed if he does not have this background. Let Y be a set and let S be a collection of subsets of Y with the following properties: Y itself belongs to S; if A belongs to S, so does its complement Y − A; and if A1, A2, … all belong to S, so does their union. Suppose further that a measure μ is defined on S, and let E be a set in S. A property is said to hold almost everywhere on E if it holds except on a subset of E of measure zero.

Analogous to the case of C[a, b] above, equality almost everywhere on E is an equivalence relation on the measurable functions on E. As such, it partitions the set into disjoint equivalence classes, where a typical class is given by

[f] = {g : g(x) = f(x) for almost every x in E}.

Let us now restrict our attention to only those classes whose representatives f are square-summable on E, in the sense that

∫_E |f|² dμ < ∞.

Note that the zero vector of the set of all square-summable functions on E is the function that is 0 everywhere on E; for an inner product, (f, f) = ∫_E |f|² dμ = 0 would have to imply that f is this zero vector. Since the only nonnegative measurable functions on E satisfying the above equality are those that equal 0 almost everywhere on E, we see that this problem is circumvented when one deals with classes. Sometimes we shall wish to deal with only real-valued functions and to view the collection as a real vector space. For the equivalence classes of square-integrable complex-valued functions on [a, b] we can take, as inner product between two classes f and g,

(f, g) = ∫_a^b f(x) ḡ(x) dx.

This space is usually referred to as L2(a, b). The proof of the following very important result can be found in the references for this chapter and will only be stated here. The following three results represent straightforward, although somewhat arduous, tasks to prove and will also only be stated. It should be noted from the above two results that if we know the norm in an inner product space, the inner product can be recovered.
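Stated here for concreteness (these are the standard identities; the book's own numbering and exact wording are not reproduced): in a real inner product space the polarization identity

(x, y) = (1/4)(‖x + y‖² − ‖x − y‖²)

recovers the inner product from the norm, while in a complex space one has

(x, y) = (1/4)(‖x + y‖² − ‖x − y‖² + i‖x + iy‖² − i‖x − iy‖²).

Among the results in question is also the parallelogram law,

‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖².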

This theorem derives its name from the analogous statement one has in plane geometry about the opposite sides of a parallelogram. If x and y represent two vectors from real euclidean 3-space, it is well known that the angle θ between these two lines in space has its cosine given by

cos θ = (x, y) / (‖x‖ ‖y‖).

The next result follows immediately from just expanding the quantities involved and using the hypothesis.
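By way of illustration, if the hypothesis in question is that x and y are orthogonal, the expansion runs

‖x + y‖² = (x + y, x + y) = ‖x‖² + (x, y) + (y, x) + ‖y‖² = ‖x‖² + ‖y‖²,

which is the Pythagorean relation.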

Let S be a collection of vectors in an inner product space with the property that (x, y) = 0 for any two distinct vectors x and y of S. Then S is said to be an orthogonal set of vectors. If, in addition, ‖x‖ = 1 for every x in S, then S is said to be an orthonormal set of vectors. Let the vector space be l2 (the space of square-summable sequences) and consider the set of vectors

e1 = (1, 0, 0, …), e2 = (0, 1, 0, …), e3 = (0, 0, 1, …), …

from this space. This collection, or any subset of this collection, is an orthonormal set according to the inner product on l2, (x, y) = ξ1η̄1 + ξ2η̄2 + ⋯. Let X be an inner product space and let S be an orthogonal set of nonzero vectors.

Then S is a linearly independent set. Before passing to the proof, the reader is asked to recall that an arbitrary set of vectors is called linearly independent if and only if every finite subset is linearly independent. In light of this, choose any finite collection of vectors x1, x2, …, xn from S and suppose that

α1x1 + α2x2 + ⋯ + αnxn = 0.

Taking the inner product of both sides with xi, and noting that (xj, xi) = 0 for j ≠ i, we get αi(xi, xi) = 0; since xi ≠ 0, (xi, xi) ≠ 0, and therefore αi = 0 for each i. Thus the collection is linearly independent. Knowing that there exists a basis for any finite-dimensional vector space, the following result assures us that, among other things, in such a space there is an orthonormal basis.

The proof will be by induction. Since x1 ≠ 0, take y1 = x1/‖x1‖; clearly ‖y1‖ = 1. Further, the spaces that x1 and y1 span must be identical, because the two vectors are linearly dependent. Suppose now that orthonormal vectors y1, y2, …, yn−1 have been constructed so that, for each k, the vectors y1, …, yk span the same space as x1, …, xk. Our job now is to construct the nth vector with the same properties.

To this end, consider the vector

w = xn − (xn, y1)y1 − (xn, y2)y2 − ⋯ − (xn, yn−1)yn−1.

For each i, (w, yi) = (xn, yi) − (xn, yi) = 0, so w is orthogonal to each of the yi. Suppose it were possible for w to be zero. Then xn would be a linear combination of y1, …, yn−1, hence of x1, …, xn−1, contradicting the linear independence of the xi; consequently w ≠ 0, and we may take yn = w/‖w‖.
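The construction just described is the classical Gram-Schmidt process. A small numerical sketch of it (ours, not the book's; the function name is illustrative, and real vectors are used for simplicity):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent real vectors."""
    ortho = []
    for x in vectors:
        # Subtract from x its components along the already-built y's.
        w = x - sum(np.dot(x, y) * y for y in ortho)
        # w == 0 would contradict linear independence of the input.
        norm = np.linalg.norm(w)
        if norm < 1e-12:
            raise ValueError("input vectors are linearly dependent")
        ortho.append(w / norm)
    return ortho

ys = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                   np.array([1.0, 0.0, 1.0])])
for y in ys:
    print(y, np.linalg.norm(y))   # each y has norm 1
print(np.dot(ys[0], ys[1]))       # ~0: the y's are mutually orthogonal
```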
