Wednesday, July 31, 2019

Macroeconomics Of Japan Essay

Japan is the greatest economy in Asia in terms of GDP, as well as human resources and technology. The nation was once predicted to become the next superpower, exceeding the United States and the countries of the European Union. Today it is the world's third-largest economy after the United States and the People's Republic of China, and the second-largest by real GDP at market exchange rates. The economy is highly efficient and competitive, especially in the services industry, a strength that originates from good cooperation between government and industry, a strong work ethic and the mastery of high technology. Recent analysis, however, has revealed that the economy is facing serious problems. Observers and even Japan's own officials have admitted that the economy is no longer 'first class'. There are even worries that Japan no longer has the capacity to remain one of the world's greatest economies, and that it will slowly degrade into a typical Asian economy. Analysts note that such a decline has happened before: Argentina, once considered one of the strongest economies in the world, has degraded into a typical third-world economy today. Is this the case with Japan? In this paper I discuss the problems within Japan's economy and elaborate on their probable causes. Afterwards, I elaborate on the macroeconomic policies the Japanese government has pursued in response to these issues and how they have affected the economy. The period of discussion is 1997-2007, the years from the burst of Japan's economic bubble to the present day.

II. Japan Economic Issues 1997-2007

II. 1. Background of the Issues - Japan Economic Bubble

Japanese growth rates have been nothing less than spectacular for decades. In the 60's the average real economic growth rate was 10%, in the 70's it was 5% and in the 80's it was 4%.
The Japanese financial system, however, was based on bureaucratic fiat. The government believed that by injecting a sufficient amount of capital into the market, the economy would experience a rapid rate of growth. Thus, the financial system was set up to inject cheap capital into the business sector (Hamada, 2004). In support of this policy, banks were even reluctant to report bad loans. In short, companies were encouraged to borrow and expand continuously. Companies would borrow against assets like land and then invest the money in the stock market. When the market rose, the company would have latent profits which could be used to buy more land, and so the cycle continued. These cycles were the origins of the huge real estate and stock market bubbles. The bubbles could not be sustained forever, however, and when the Bank of Japan (BOJ) raised interest rates, the bubble burst in 1989, leaving commercial banks in Japan with a mountain of bad loans.

II. 2. Stagnant Economic Growth

Afterwards, asset prices began to decline rapidly. Japan's economy went through a long period of deflation from then on, caused in part by the appreciation of the yen. Because of this appreciation, the CPI growth rate dropped into negative territory in 1995. The expanding deflation caused Japan's economy to stagnate. Moreover, the deepening deflation was accompanied by a weakening real economy: declining growth rates and rising unemployment. Between 1992 and 1994, real growth rates were below 1%; growth even dropped into negative territory in 1998. The jobless rate rose by 3.4 percentage points, from 2% in 1990 to 5.4% in 2003. The economic downturn in 1997 put the Japanese economy into a new state of deflation (Oliver, 2002).

II. 3. Deflationary Trap

The situation was not considered serious until the inflation rate slipped below zero in 1997. In this phase, observers believed that Japan was in a 'deflationary trap'.
However, because of various long-term considerations, the government implemented policies to keep inflation stable near the zero mark. In this situation, the central bank cannot use its traditional instruments to deal with the issue. As a result, deflation deepened even further and the market's expectations of a further and longer period of deflation intensified. Due to the increase in the real rate of interest, consumer spending and corporate investment were discouraged. Unfortunately, the shrinking total demand in the macroeconomy worsened the deflation further. If not dealt with accordingly, this could lead to a self-sustaining deflationary process (Campbell, 1992).
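The mechanism behind the rising real interest rate can be made explicit with the standard Fisher relation (a textbook identity, not quoted from this essay):

```latex
r \approx i - \pi^{e}
```

With the nominal rate i held near zero and expected inflation pi^e turning negative, the real rate r becomes positive and rises as deflationary expectations deepen: for example, i = 0 and pi^e = -1% gives r = 1%. This is exactly why consumer spending and corporate investment were discouraged even at near-zero nominal rates.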

Man in a Corner Essay

Augustus Cain is a good person; despite his background and upbringing, he is able to grow over the course of the narrative. Cain is a man in a corner; his conditions determine his values and morals. He has lost himself to his own society. Although he has lost himself, he evolves and becomes a "soul catcher" many times throughout this novel; one of the souls he catches is even his own. He evolves as a person by constantly breaking the four guiding principles his father told him to follow. The four guiding principles were that "one should always respect one's property: that it was necessary to care for and protect it, to never misuse it, as it will someday be called upon to care for and protect you" (White 31); "That a Negro was in many ways like a child and it was the moral duty of the white man to look after and guide them" (White 31); "That his very whiteness not only set him apart from and above them - morally, intellectually, physically - but that it also linked him in a blood bond with every other white man" (White 32); and that "Whites and Negroes were created by the Almighty to be separate" (White 32). Cain engages in a forbidden relationship with a black woman named Rosetta. This relationship tests Cain's character, will, care and decisions. She tests Cain's will while she bathes in the river. While Rosetta is bathing, Cain is half turned away; "he felt this to be some sort of test of will, a temptation he felt bound to renounce in order to prove to himself, that he wasn't common, that he wasn't like Preacher or Strofes. That he was different" (White 206). He is also attracted to her in this scene. After Rosetta exits the river, he glances at her and has a hard time averting his stare. He says that "he felt shamed as a rumbling commenced down between his own legs" (White 207) as he looked between hers. When Preacher tries to rape Rosetta, Cain almost kills him in her defense.
When Rosetta is kidnapped, Cain goes and asks around for her, saying "I'm trying to help her" (White 287), and pushes onward out of determination to save her. After saving her, Cain is captured by John Brown; he says that he doesn't plan to send Rosetta back to Eberly, and Brown believes him and sends him away to a settlement in Ohio called Gist. Cain and Rosetta are lying together in a cabin when Rosetta kisses him; they continue to kiss and begin to remove their clothes. Cain then thinks, "He knew that he was crossing a line that he could never cross back over again" (White 377), and they make love. Cain is different from the other men portrayed in this novel, men who are supposed to be brothers to him because of the blood bond they share. Cain is more sympathetic to blacks than the rest. When Preacher is beating Joseph, Cain hates Preacher for his mindless cruelty; Cain didn't believe in harming anyone in his "profession" unless it was absolutely necessary. "He preferred using his wits rather than violence or force" (White 55). He also feels for Joseph, so much that he can't ignore Preacher slashing cuts in his body. He says to him, "Alright, that'll be enough" (White 56), and kicks him. He also tells Joseph to stop acting like a cur if he doesn't want to be treated like one, degrading him to a dog in this scene. He also tells Joseph that if he doesn't comply with them, Preacher will hurt him, and that Cain himself "didn't want that" (White 58). When Preacher suggests they sell the boy, Cain disagrees and calls it common thievery. Preacher also says that Cain shouldn't act so high and mighty, and that "Slave catcher, blackbirder. They're all the same in my book" (White 60). Cain responds, "No, we're not all the same. I'm carrying out the law" (White 60).
This is a point in the novel where Cain again separates himself from his "brothers" and explains that he is only doing this because he has to survive and pay off his debt, not out of malicious intent like a blackbirder or most slave catchers. At the end of all this, Cain gives the boy a dollar coin to pay him back for his eggs. Throughout the novel, Cain is compared metaphorically to the very people his whiteness is supposed to set him apart from. Cain is a runaway. He ran away from the life of a farmer and slave-owner. "Cain had decided early on that he wasn't cut out for the life of a farmer" (White 32). He instead joins the military, essentially to escape this inheritance from his father. When he tells Rosetta this, she says to him, "that makes you a runaway, too, Cain" (White 245). He also "runs away" from the problems in his world by drowning himself in vices, which are also the very things that metaphorically "enslave" him. He is an alcoholic who constantly drinks laudanum, has a gambling problem, and constantly has sex with prostitutes. He is also tired of working for people like Eberly, white slave owners "that thought their money made him their nigger" (White 11). Cain also explains to Rosetta that he has to bring her back to Eberly, although he doesn't want to, because of his honor. She says, "Honor. He done bought and paid for you just like me"; Cain responds, "No one owns me"; Rosetta replies, "Oh, he own you, all right" (White 210). The only difference between them is that she knows it and he doesn't. Cain is, overall, a different person at the end of this novel. He has been involved in an interracial affair with a black woman. He has not respected what is technically another man's "property" (Rosetta, in Eberly's eyes) by not returning her to him, which of course was the right thing to do.
He is also acknowledged by Rosetta and other people in the novel, even John Brown, as a "good man".

Tuesday, July 30, 2019

Corpus Linguistics Essay

Introduction

This paper includes information about corpus linguistics and its connection with lexicology and translation. The latter is the most important to me, and I am keen on finding and introducing material that is closely connected with my future profession. Frankly speaking, it was not an easy journey, but I am hopeful it is destined to be successful. A corpus is an electronically stored collection of samples of naturally occurring language. Most modern corpora are at least 1 million words in size and consist either of complete texts or of large extracts from long texts. Usually the texts are selected to represent a type of communication or a variety of language; for example, a corpus may be compiled to represent the English used in history textbooks, or Canadian French, or Internet discussions of genetic modification. Corpora are investigated through the use of dedicated software. Corpus linguistics can be regarded as a sophisticated method of finding answers to the kinds of questions linguists have always asked. A large corpus can be a test bed for hypotheses and can be used to add a quantitative dimension to many linguistic studies. It is also true, however, that corpus software presents the researcher with language in a form that is not normally encountered, and that this can highlight patterning that often goes unnoticed. Corpus linguistics has also, therefore, led to a reassessment of what language is like. During this journey we will try to find out:

- What is Corpus Linguistics
- Corpus Linguistics Terms and Their Meanings
- History of Corpus Linguistics
- Resources and Methodologies for Corpus Linguistics, Corpora
- Translation
- Corpus Linguistics and Linguistic Theory, Corpus-Based Descriptions

So fasten your seat belts, we are flying!

What is Corpus Linguistics?

Corpus linguistics is the study of language and a method of linguistic analysis which uses a collection of natural or "real world" texts known as a corpus.
Corpus linguistics is used to analyse and research a number of linguistic questions and offers a unique insight into the dynamics of language, which has made it one of the most widely used linguistic methodologies. Since corpus linguistics involves the use of large corpora that consist of millions or sometimes even billions of words, it relies heavily on computers to determine what rules govern the language and what patterns (grammatical or lexical, for instance) occur. Thus it is not surprising that corpus linguistics emerged in its modern form only after the computer revolution of the 1980s. The Brown Corpus, the first modern and electronically readable corpus, however, was created by Henry Kucera and W. Nelson Francis as early as the 1960s.

Corpus Linguistics Terms and Their Meanings

Corpus (plural corpora). It refers to a collection of systematically or randomly collected texts of natural language which is electronically stored and processed. A corpus can consist of texts in a single language or in multiple languages. It contains a large number of texts which allow researchers to analyse linguistic rules, but the corpus does not represent the entire language, no matter how large it is. Multilingual corpus. As its name suggests, a multilingual corpus consists of texts in multiple languages. Parsed corpus (treebank). It is a collection of texts in naturally occurring language in which each sentence is parsed, i.e. syntactically analysed and annotated. Syntactic analysis is typically given in a tree-like structure, which is why a parsed corpus is also known as a treebank. Parallel corpora. The term refers to a collection of texts which are translations of each other. Annotation. It refers to an extension of the text by the addition of various linguistic information; examples include parsing, tagging, etc. Annotated corpora stand in contrast to raw corpora, which consist of plain text in its raw state. Collocation.
It refers to a sequence or pattern in which words appear together or co-occur. Concordance. The term encompasses a word or phrase and its immediate context. In corpus linguistics, concordance is used to analyse different uses of a single word, word frequency, and phrases or idioms. Orthography. It is a standardised writing system of a particular language and includes various rules for spelling, capitalisation and punctuation. Orthography can pose a problem in the analysis of writing systems which use accents, because native speakers of these languages sometimes substitute alternative characters for the accented letters or omit them completely. Token. It is an occurrence of an individual word; it plays an important role in so-called tokenisation, the division of a text or collection of words into tokens. This method is often used in the study of languages which do not delimit words with spaces. Lemmatisation. The term derives from the word lemma, which refers to the set of different forms of a single word, such as laugh and laughed for example. Lemmatisation is the process of grouping these forms together so that they can be analysed as a single item. Wildcard. It refers to special characters such as the question mark (?) or asterisk (*) which can represent a character or word. 3A perspective. It is a research method used in corpus linguistics which was introduced by S. Wallis and G. Nelson; 3A stands for annotation, abstraction and analysis.

History of Corpus Linguistics

The history of corpus linguistics is typically divided into two periods: early corpus linguistics, also known as pre-Chomsky corpus linguistics, and modern corpus linguistics. The early examples of corpus linguistics date to late 19th-century Germany. In 1897, the German linguist J. Kading used a large corpus consisting of about 11 million words to analyse the distribution of letters and their sequences in the German language.
The impressively sized corpus, which corresponds to the size of a modern corpus, was revolutionary at the time. Other early linguists who used corpora to study language include Franz Boas (Handbook of American Indian Languages, 1911), Zellig Harris (Methods in Structural Linguistics, 1951), Charles C. Fries (The Structure of English, 1952), Leonard Bloomfield (Language, 1933), Archibald A. Hill and others, mostly American structural and field linguists. Some of them, such as Fries and A. Aileen Traver, also started to use corpora in the pedagogical study of foreign languages. In 1961, Henry Kucera and W. Nelson Francis of Brown University started work on the Brown University Standard Corpus of Present-Day American English, commonly known simply as the Brown Corpus, which is the first modern, electronically readable corpus. It consists of 1 million words of American English text organised into 15 categories. By the modern standards of corpus linguistics the Brown Corpus is small; however, it is widely considered one of the most important works in the history of corpus linguistics. But this was also the time of Chomsky's criticism of corpus linguistics, which resulted in a period of decline. Chomsky rejected the use of corpora as a tool for linguistic study, arguing that the linguist must model language on competence instead of performance; and according to Chomsky, a corpus does not allow language modelling on competence. Corpus linguistics was not abandoned completely; however, it was not until the 1980s that linguists began to show an increased interest in the use of corpora for research. The revival of corpus linguistics and its emergence in its modern form was greatly influenced by the advent of computers and network technology in the 1980s, which allowed linguists to use electronic language samples as well as electronic tools.
The use of computers, however, dates back to the early 1970s, when the Montreal French Project developed the first computerised corpus of spoken language, while Jan Svartvik began work on the London-Lund Corpus with the aid of the Brown Corpus and the Survey of English Usage (SEU) at University College London. All the works mentioned before the 1980s, as well as the early examples of corpus linguistics, paved the way for the modern corpus-based study of language as we know it today. The term corpus linguistics was finally adopted after J. Aarts and W. Meijs published Corpus Linguistics: Recent Developments in the Use of Computer Corpora in English Language Research in 1984.

Resources and Methodologies for Corpus Linguistics, Corpora

The basic resource for corpus linguistics is a collection of texts, called a corpus. Corpora can be of varying sizes, are compiled for different purposes, and are composed of texts of different types. All corpora are homogeneous to a certain extent: they are composed of texts from one language, one variety of a language, one register, etc. They are also all heterogeneous to a certain extent, in that at the very least they are composed of a number of different texts. Most corpora contain information in addition to the texts that make them up, such as information about the texts themselves, part-of-speech tags for each word, and parsing information.

What Corpus Linguistics Does

Gives access to naturalistic linguistic information. As mentioned before, corpora consist of "real world" texts which are mostly a product of real-life situations. This makes corpora a valuable research source for dialectology, sociolinguistics and stylistics. Facilitates linguistic research. Electronically readable corpora have dramatically reduced the time needed to find particular words or phrases. Research that would take days or even years to complete manually can be done in a matter of seconds with the highest degree of accuracy.
Enables the study of wider patterns and collocations of words. Before the advent of computers, corpus linguistics studied only single words and their frequency; modern technology has allowed the study of wider patterns and collocations of words. Allows analysis of multiple parameters at the same time. Various corpus linguistics software programmes and analytical tools allow researchers to analyse a large number of parameters simultaneously. In addition, many corpora are enriched with various linguistic information such as annotation. Facilitates the study of a second language. Studying a second language through natural language allows students to get a better "feeling" for the language and to learn the language as it is used in real rather than "invented" situations.

What Corpus Linguistics Does Not Do

Does not explain why. The study of corpora tells us what happened and how, but it does not tell us why the frequency of a particular word has increased over time, for instance. Does not represent the entire language. Corpus linguistics studies the language by using randomly or systematically selected corpora. These typically consist of a large number of naturally occurring texts; however, they do not represent the entire language. Linguistic analyses that use the methods and tools of corpus linguistics thus do not represent the entire language.

Searches, Software, and Methodologies

Corpora are interrogated through the use of dedicated software, the nature of which inevitably reflects assumptions about methodology in corpus investigation. At the most basic level, corpus software:

- searches the corpus for a given target item,
- counts the number of instances of the target item in the corpus and calculates relative frequencies,
- displays instances of the target item so that the corpus user can carry out further investigation.

It is apparent that corpus methodologies are essentially quantitative.
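These three basic operations (search, count with relative frequency, and keyword-in-context display) can be sketched in a few lines of Python. This is an illustration added here, not part of the essay; the sample sentence and the kwic helper are invented for the demonstration, and real concordance software works over millions of words:

```python
def kwic(tokens, target, context=3):
    """Return keyword-in-context lines for every occurrence of target,
    showing up to `context` tokens on each side."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok == target:
            left = " ".join(tokens[max(0, i - context):i])
            right = " ".join(tokens[i + 1:i + 1 + context])
            lines.append(f"{left} [{tok}] {right}")
    return lines

# A toy 15-word "corpus" for the sketch.
text = ("corpus linguistics studies language through corpora "
        "and corpus software displays each corpus hit in context")
tokens = text.split()

# Display every hit in context (the concordance view).
for line in kwic(tokens, "corpus"):
    print(line)

# Relative frequency of the target in the sample: 3 hits in 15 tokens.
rel = tokens.count("corpus") / len(tokens)
print(rel)  # 0.2
```

The same search-count-display loop is what dedicated concordancers perform, only with indexing so that queries over very large corpora return in seconds.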
Indeed, corpus linguistics has been criticized for allowing only the observation of relative quantity and for failing to expand the explanatory power of linguistic theory (for discussion, see Meyer, 2002: 2-5). It is shown in this article that corpus linguistics can indeed enrich language theory, though only if preconceptions about what that theory consists of are allowed to change. Here, however, we leave that argument aside as we review corpus investigation software in more detail.

Corpus Linguistics and Linguistic Theory, Corpus-Based Descriptions

As has been noted, corpus linguistics is essentially a methodology or set of methodologies, rather than a theory of language description. Essentially, corpus linguistics means this:

- looking at naturally occurring language;
- looking at relatively large amounts of such language;
- observing relative frequencies, either in raw form or mediated through statistical operations;
- observing patterns of association, either between a feature and a text type or between groups of words.

Reduced to its essence in this way, corpus linguistics appears to be 'theory neutral', although the practice of doing corpus linguistics is never neutral, as each practitioner defines what is meant by a 'feature' and what frequencies should be observed, in line with a theoretical approach to what matters in language. Approaches to the use of a corpus that essentially rely on the existence of categories derived from noncorpus investigations of language are sometimes referred to as 'corpus based' (Tognini-Bonelli, 2001). Studies of this kind can test hypotheses arising from grammatical descriptions based on intuition or on limited data. Experiments have been designed specifically to do this (Nelson et al., 2002: 257-283).
For example, Meyer (2002: 7-8) describes work on ellipsis from a typological and psycholinguistic point of view that predicts that of the three possible clause locations of ellipsis in American spoken English, one will be much more frequent than the others. A corpus study reveals this to be an accurate prediction. On the other hand, the study of pseudo-titles mentioned in the section 'Languages and Varieties' shows how assumptions about language, in this instance about the influence of one variety of English on another, can be shown to be false. Biber et al. (1999: 7) comment that "corpus-based analysis of grammatical structure can uncover characteristics that were previously unsuspected." They mention as examples the surprisingly high frequency of complex relative clause constructions in conversation, and the frequency of simplified grammatical constructions in academic prose. A clearer integration between linguistic theory and corpus linguistics is demonstrated by Matthiessen's work on probability (see the section 'Probability'). This work takes its categories from an existing description of English (Halliday's (1985) systemic functional grammar), but the corpus study was more integral to the theory, as it was the only way that statements about the probability of occurrence of each item in the system could be made with accuracy.

Corpus-Driven Descriptions

However, more radical challenges to language description can be found. Sinclair (1991, 2004) argues that the kind of patterning observable in a corpus (and nowhere else) necessitates descriptions of a markedly different kind from those commonly available. Both the descriptions and the theories that they in turn inspire are, in Tognini-Bonelli's (2001) terms, "corpus driven." Some of the challenges to tradition that corpus-driven theories involve are these:

- Lexis and grammar are not distinct, and grammar is not an abstract system underlying language.
- Choice of any kind is heavily restricted by choice of lexis.
- Meaning is not atomistic, residing in words, but prosodic, belonging to variable units of meaning and always located in texts.

Evidence for these claims is presented in the section 'Observing patterned behavior' above. The notion of pattern grammar focuses on the way that different lexical items behave differently in terms of how they are complemented. Grammatical generalizations about complementation cannot be made without describing that individual lexical behavior. Similarly, the choice between features such as 'positive' and 'negative' depends to some extent on the lexical item, as some verbs (such as afford) occur in the negative much more frequently than most. In other words, the probability of any grammatical category's occurring is strongly affected not only by the register but also by the lexis used. Finally, the evidence of phraseology is that it makes more sense to see meaning as belonging to phrases than to individual words. Findings such as these have led many writers to see a need for descriptions of language that are radically different from those currently available. Sinclair (1991, 2004) proposes, for example, that meaning be seen as belonging to 'units of meaning', each unit being describable in its own terms. He criticized conventional grammar for distinguishing between structures (a series of 'slots') and lexis (the 'fillers'), such that it appears that any slot can be filled by any filler: there are no restrictions other than what the speaker wishes to say. This is clearly sometimes the case.

Translation

Corpora can be used to train translators, used as a resource for practicing translators, and used as a means of studying the process of translation and the kinds of choices that translators make.
Parallel corpora are often used in these applications, and software exists that will 'align' two corpora such that the translation of each sentence in the original text is immediately identifiable. This allows one to observe how a given word has been translated in different contexts. One interesting finding is that apparently equivalent words, such as English go and Swedish gå, or English with and German mit (Viberg, 1996; Schmied and Fink, 2000), occur as translations of each other in only a minority of instances. This suggests differences in the ways those languages use the items concerned. More generally, examination of parallel corpora emphasizes that what translators translate is not the word but a larger unit (Teubert and Čermáková, 2004). Although a single word may have many equivalents when translated, a word in context may well have only one such equivalent. For example, although travail as an individual word is sometimes translated as work and sometimes as labor, the phrase travaux préparatoires is translated only as preparatory work. Thus, Teubert and Čermáková argue, travaux préparatoires and preparatory work may be considered equivalent translation units, whereas no such claim can be made for travaux and work. As well as giving information about languages, corpus studies have also indicated that translated language is not the same as nontranslated language. Studies of corpora of translated texts have shown that they tend to have higher incidences of very frequent words and that they tend to be more explicit in terms of grammar (Baker, 1993). They may also be influenced by the structure of the source language, as was indicated in the discussion of wh-clefts in English and Swedish in the section 'Languages and Varieties'. In communities where people read a large number of translated texts, the foreign language, via its translations, may even influence the home language.
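The kind of lookup an aligned parallel corpus supports can be illustrated crudely in Python. The three sentence pairs below are invented for the sketch (real alignment software and corpora are far larger and more sophisticated); the point is only that searching a source word across aligned pairs surfaces its different translations in context:

```python
# Toy aligned "parallel corpus": (French source, English translation) pairs.
# The sentences are made up for this illustration.
aligned = [
    ("le travail est fini", "the work is finished"),
    ("un travail difficile", "a difficult labor"),
    ("les travaux preparatoires", "the preparatory work"),
]

def translations_of(word, pairs):
    """Return the English sentences aligned with French sentences
    containing the given word (a crude word-in-context lookup)."""
    return [en for fr, en in pairs if word in fr.split()]

print(translations_of("travail", aligned))
# 'travail' on its own surfaces as both 'work' and 'labor',
# while the phrase 'travaux preparatoires' maps to a single rendering.
```

Even this toy lookup shows why the phrase, not the individual word, behaves as the stable unit of translation.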
Gellerstam (1996) notes that some words in Swedish have taken on the meanings of similar-looking English words, and argues that this is because translators tend to translate the English word with the similar-looking Swedish word, thereby using the Swedish word with a new meaning, which then enters the language. One example is the Swedish word dramatisk, which used to indicate something relating to drama but which now, like the English word dramatic, also means 'substantial and surprising'.

Conclusion

So every journey has its end, and ours is no exception. It was a long journey, but it was worth it. Corpus linguistics is a relatively new discipline, and a fast-changing one. As computer resources, particularly web-based ones, develop, sophisticated corpus investigations come within the reach of the ordinary translator, language learner, or linguist. Our understanding of the ways that types of language might vary from one another, and our appreciation of the ways that words pattern in language, have been immeasurably improved by corpus studies. Even more significant, perhaps, is the development of new theories of language that take corpus research as their starting point.

The list of used literature

1. Halliday, M. A. K. Lexicology and Corpus Linguistics.
2. Teubert, W. and Čermáková, A. 2004.
3. Wallis, S. and Nelson, G. 'Knowledge discovery in grammatically analysed corpora'. Data Mining and Knowledge Discovery, 5: 307-340. 2001.

Monday, July 29, 2019

Management communication Essay Example | Topics and Well Written Essays - 500 words

However, in this situation, we have seen that there has been discrimination in the Dewey Ballantine community, particularly against the Asian community, for which please consider this a sincere apology. We do realize that the Asian community seems to have been targeted in this scenario, which was not the intended purpose of the various communications that have gone out from the partners. Rather, it was a sincere effort only to call attention to facts that the partners considered important regarding the rights of certain communities, including animals. That said, the partners should have considered the type of message such communication would send to certain community members before sending out the email pertaining to puppies. We note that the Asian community was offended when this email was sent out, because it seemed to run completely counter to Asian cultural values, and we understand that this can create divisions among employees within the organization. We also realize that this problem may persist and cannot be handled lightly. An understanding of communities and their cultural elements must be developed in every employee in order to avoid such problems in the future; thus, Dewey Ballantine will not take a low profile in this case. Since this tension over cultural differences can take a turn for the worse as well, it is important that, instead of accepting a tarnished reputation, Dewey Ballantine take corrective as well as preventive steps to make sure this situation does not arise again. Therefore, the organization is looking into developing communication guidelines that can help avoid such inclusion of community-based elements so that situations like these do not arise in the future. These communication guidelines will include all elements pertaining to culture that would need to be avoided so that

Sunday, July 28, 2019

Propaganda in the first and second world wars Research Paper

Governments manage to design propaganda by lying, telling partial truths, or exaggerating the issues at hand. Governments use propaganda for various reasons during wars, but the bottom line in the use of propaganda is to gain a competitive advantage over their enemies and win the support of their citizens. Propaganda in the first and second world wars Introduction In the book Propaganda and Persuasion, propaganda is defined as "a deliberate and systematic attempt that aims at shaping perceptions, manipulating cognition, as well as directing behavior with the ultimate aim of achieving a response, which portrays the intention of the propagandist" (Jowett & O'Donnell, 2011). The main aim in the use of propaganda is to make the respondent act, agree, or go along and assist in adopting certain policies. The use of propaganda in times of war can be dated back to 1622, when Pope Gregory XV applied this technique to calm religious wars in Alsace, Bohemia, and the Palatinate. At the time, the use of propaganda appeared to be the only solution that would fight down the effects of the Protestant Reformation (Finch, 2000). After its successful use during the reign of Pope Gregory XV, propaganda later gained popularity in the wars of the nineteenth century. In the first and second world wars, the main practitioners of propaganda were the American and British governments. An American political scientist, Harold Lasswell, published a book that strongly supported the use of propaganda by America, despite America's denial of its use of this technique. Lasswell and his fellow political scientists gave a clear documentation of propaganda, which was even used by the Germans in the 1930s to acquaint themselves with skills in the use of propaganda (Finch, 2000). Lasswell's publication pointed out that the application of propaganda during wartime was "neither ominous nor insidious."
The publication further pointed out that propaganda had become part and parcel of the weapons used during wars and would remain a component of wars forever. Lasswell referred to propaganda as an act that encompasses the management of attitudes and opinions by directly altering social suggestion, as opposed to changing other conditions either in the environment or in the organism (Finch, 2000). The Americans and the Britons hesitated to accept the use of propaganda as a legitimate tool in the first and second world wars. However, a British journalist by the name of Beatrice Leeds pointed out that propaganda became acceptable the moment Russia went to war with Germany. The governments allied to Russia accepted that the use of propaganda would go a great way in fighting the Germans (Marquis, 2009). One notable thing in democratic nations was the dismantling of departments of information. This was due to the perception that information and the mass media played a significant role in the spread of propaganda. In America, however, the case was different due to the introduction of an Act that supported the establishment of a propaganda radio network. This network was the "Voice of America", which was assigned the responsibility of transmitting pro-American, democratic opinions across the world without mentioning propaganda. After the First World War, America, Britain, Germany, and the Soviet Union became serious debaters of the impacts of influencing their citizens' opinions through propaganda. In Germany, numerous research laboratories were set up to study the

Saturday, July 27, 2019

Marketing case study Essay Example | Topics and Well Written Essays - 250 words

Since customers' feedback is essential, the administration of the online surveys will continue in order to maintain the tracking of opinions. In case the surveys suggest a change in the products or activities, eBay will modify the relevant factors within an apt range. In some locations, the surveys will be delivered to the store representatives for distribution (Kurtz, 2011). The representatives may also be motivated by offering them small reference books and diaries with the organization's logo. The trend outlined above manifests constant progress in the sales and revenue of eBay Inc. From an analytical perspective, the results show that the strategies put in place in the past five years have been useful. In 2009, the policies put in place led to a slight increase in the curve until 2010. After 2010, the curve showed a consistent, sharp rise from 9.15 to 16.05 billion. Therefore, the corporation is doing well with the strategies put in place and their implementation as well (Kurtz,

Friday, July 26, 2019

Write a critical review of Five minds of a manager of Henry Mintzberg Annotated Bibliography

What is critical about this article is the assumption that the authors have attempted to generalize the different organizing principles applied by managers. Though the authors focus upon managing self, organization, context, relationships, and change within an organization, for a manager to master all of these traits at one time could be a difficult task. Since the authors suggest that managers must bring all mindsets to work together, it is relatively difficult for managers to assume all the roles and perform each at its best. Managers may have to make a trade-off between certain mindsets, as their actions must be based upon what is in the best interest of the organization, taking into account the cost-benefit analysis of their decisions and actions. The authors argue that all five mindsets must be woven together to achieve balance; however, this balancing act may not be possible to achieve. This article discusses the mindset required to mentor employees and help them grow. The author outlines that, rather than properly grooming successors, organizations let time pass and fail to groom their employees and successors. The approach taken by managers may not be suitable enough to allow successors to develop the maturity to assume positions of responsibility in the future. The author therefore argues that, to properly mentor employees for the next level in their careers, mentors must assume a special mindset which can foster such mentoring within the organization. This requires a slow, subtle, and forgiving mindset that allows managers to accommodate the mistakes of their followers and help them correct those mistakes.
This article is limited in the sense that it presents just one side of the argument and provides

Thursday, July 25, 2019

Communication Personal Statement Example | Topics and Well Written Essays - 750 words

It is believed for the most part that the process of selective perception is a psychological process and therefore one that is not done consciously. The best analogy of this process is when a person says "You only hear what you want to hear". As absurd as this sounds, the example stated is in fact exactly what happens in selective perception. This is not to be interpreted as a bad thing, but rather as a byproduct of a society that is constantly multitasking. Each person has their own list of priorities, and simply because two people may have an issue in common does not mean that the issue takes the same place on that list of priorities that we keep subconsciously. Because we are constantly bombarded with too many stimuli every day to pay equal attention to everything, we pick and choose according to our own needs. In completing our assignments, I noticed that not only are most people guilty of selective perception, but I am as well. Although my intentions were good, the fact was that I did not get the message that the other person attempted to convey, and I found this issue to be a part of my life in my educational pursuits and my job. Selective exposure is the tendency to avoid information inconsistent with one's beliefs and attitudes. This, to me, is somewhat akin to the theory of selective perception, yet on a more conscious level. For example, as I noted in my essay, I would deliberately avoid people who cursed incessantly because I consider cursing an unnecessary coping skill. I don't see why I should expose myself to those whom I consider to be "serial cursers", and as a result, they are consciously excluded from my social circles. I learned from this experience that selective exposure will ultimately retard my growth in both my personal and professional relationships.
In employing selective exposure, I learned that the message I am really conveying is "I don't care what you think or say" and "Whatever you have to say is not worth my time to listen to." It makes me appear to be extremely close-minded and ultimately stops others from wanting to communicate with me. It would be remiss of me not to convey the fact that the Johari Window presented a bit of a challenge to me. While I can certainly understand the theory behind the model, I do find it rather difficult to apply to my own business relationships. While I am aware of the fact that trust is an essential component of business team relationships, there is still a competitive piece to it, and that cannot be ignored, especially in this economy. With pink slips being sent out on a daily basis, I cannot imagine that it will be easy to accept members of a team as anything other than competition. Workers in every field are trying to demonstrate that they are better than their co-workers, so I cannot imagine that team spirit really exists at this juncture. Thus, while the Johari Window is one that would work in a perfect economy, I don't believe that it would work in today's dire economic times. I think that one of the more difficult tasks is honest reflection on self-perception. People decide on their own attitudes and feelings from watching themselves behave in various situations. This is particularly

Superpave Binder Specifications Essay Example | Topics and Well Written Essays - 750 words

The distresses include fatigue cracking, thermal cracking, and rutting. Performance-related tests were applied to address these three distresses, which are attributed to climate changes. Another specification is grade selection. The grade-selection specification entails determining the temperature extremes under which the pavement must perform. Typically, a pavement performs under a certain range of temperatures (Texas Department of Transportation). The grade can be established by indicating the low and high temperatures for pavement performance. Distress and tests form another specification, in which the binder and pavement life become predictable when the pavement lasts long enough. Testing for compliance is important to establish the PG binder grade. This is done through classification for an unknown PG grade and verification for a known one (Texas Department of Transportation). The proper binder grade for the Bowling Green, Kentucky area is selected based on the Superpave aggregate requirements. The pavements have to satisfy compaction requirements, which are unusual. The mix needs to reach at least 92 percent of solid density (The American Association of State Highway and Transportation Officials). This is due to the unusual compaction requirements that would be applied (The American Association of State Highway and Transportation Officials). Segregation may be used to describe various phenomena, but here it refers to a lack of homogeneity in the constituents of hot-mix asphalt in the in-place mat, of such a magnitude that highly accelerated pavement distresses can be expected. The segregation of HMA pavements is a major problem because it results in poor performance in many pavements (Cross and Brown, 1).
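The grade-selection step described above can be sketched numerically. The following is a minimal illustration only, assuming the standard 6 °C PG increments of AASHTO M 320; the grade lists, the function name, and the example temperatures are illustrative and not taken from the essay:

```python
# Hedged sketch: selecting a Superpave Performance Grade (PG) binder from
# design pavement temperatures. PG grades are specified in standard 6 C
# steps (e.g. PG 64-22); this simply rounds the design temperatures
# outward to the next standard grade. The grade lists are assumptions
# based on AASHTO M 320 and may not match every agency's specification.

HIGH_GRADES = [46, 52, 58, 64, 70, 76, 82]        # C, 7-day max pavement temp
LOW_GRADES = [-10, -16, -22, -28, -34, -40, -46]  # C, min pavement temp

def select_pg_grade(high_temp_c, low_temp_c):
    """Return the PG grade covering the given design temperature extremes."""
    high = next(g for g in HIGH_GRADES if g >= high_temp_c)
    low = next(g for g in LOW_GRADES if g <= low_temp_c)
    return f"PG {high}{low}"

# A site with a 7-day maximum pavement temperature of 62 C and a
# minimum of -20 C would need at least:
print(select_pg_grade(62, -20))  # PG 64-22
```

A real selection would also shift the grade for traffic speed and volume, which this sketch ignores.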

Wednesday, July 24, 2019

Information System Development Blog Essay Example | Topics and Well Written Essays - 500 words

The commands available are compiled together in the menu, while actions are performed later. The windowing system deals with software devices such as graphics hardware and pointing devices, besides cursor positioning. In a personal computer, these elements are all modeled via a desktop metaphor in order to produce a desktop-environment simulation, where the display represents a desktop on which documents and document folders can be placed. Window managers combine with other software to simulate a desktop environment with varying degrees of realism. The process that takes place in user interface components is as follows: the message is first relayed to the physical component, perceived by the perceptual components, and then conceived by the conceptual components. Though the three components have different functions, their functions are related. Three-dimensional user interface components, especially those designed for graphics, are common in movies and literature. They are also used in art, computer games, and computer-aided design. They are important considerations in interaction design because they enhance efficiency and make it easy to use the underlying logical design of stored programs, that is, usability (Marcin, 2009). The user typically interacts with information through the manipulation of visual widgets, which allow for interactions appropriate to the held data. Together, they support the actions necessary to achieve the objectives of the user. A model-view-controller ensures a flexible structure that is independent from the interface but indirectly linked to functional applications, allowing easy customization. This allows users to design and select different skins at will. User-centered design methods make sure that the visual image introduced in the design is tailored to the duties it must perform.
Larger widgets like windows normally provide a frame for the content of the main presentation like web page or email message. On the other

Tuesday, July 23, 2019

Incidents in the Life of a Slave Girl by Harriet Jacobs Essay

Due to the efforts of Jacobs' biographer Yellin and the discovery of Jacobs' letters to many abolitionists, the narrative's authenticity was established. Harriet Jacobs was not a proficient writer at first. However, she had a story to tell, and she worked at developing her writing skills. By 1858 she had finished the manuscript of the book, which was then proofread by L. Maria Child and published. The first sentence of the narrative makes us aware that the story is autobiographical. The personal story of the author served as the basis of Incidents in the Life of a Slave Girl. Jacobs's biographer Yellin confirmed that the events of the Incidents by Linda Brent coincided with the key events of Jacobs' life, a suggestion earlier voiced by Amy Post. The facts of the lives of the main character and the author are identical, and one can easily track them. The similarities between Linda's early childhood in the Incidents and Harriet Jacobs's childhood are the death of the mother, which makes her aware of her slave status; the death of the mistress who cared for her; her purchase by the mistress' sister for her five-year-old daughter; the death of the father; and so on. Later, when Linda Brent's mistress was married to Dr. Flint (Dr. James Norcom in real life), Linda was hounded by him. She desperately tried to escape Dr. Flint, entered into intimate relations with Samuel Tredwell Sawyer (Mr. Sands in the narrative), and bore him two children, Joseph and Louisa Matilda (Ben and Ellen in the narrative). The other vivid biographic feature depicted in the story is Linda's seven-year 'imprisonment' in her grandmother's attic to avoid the abuse of Dr. Flint. Incidents in the Life of a Slave Girl is a fundamental work which changed the traditional view of the slave narrative, which had primarily been written by male authors. This shift allowed issues of family, womanhood, and sexuality to be emphasized in a different light.
The standards of womanhood which

Monday, July 22, 2019

The Time Trap Essay Example for Free

Abstract: Timeliness is an important and significant factor of production. Timeliness refers to an efficient consciousness of time as a resource for achieving a desired result within a specified period. It has widely been misconceived, which challenges its efficiency and effectiveness. Increased education operationalizes the relevance of timeliness and therefore enhances productivity and the wise utilization of time. Careful consideration of time management increases the benefits to an organization.

Time is an important resource available to all in equal measure. The wealthy and the poor alike have at their disposal the same 24 hours each day. The concern of the world is how mankind uses or accounts for every second on a daily basis. Timeliness is referred to by academicians as a deliberate act of consciousness and wise use of time. It can also be defined as working to achieve set objectives within the stipulated framework of a specified time. Timeliness is therefore a measure of efficiency in production. Organizations, especially government institutions, are victims of the mismanagement of time. For them, timeliness is not a measure of production, and it is never too late to carry out a government task.

The above attitude draws its foundations from a number of misconceived ideas about time management. One of the misconceptions attaches success to timeliness: people believe that if they are achievers, then they are using their time well. This is in most cases misleading, as much time is always used to achieve the success claimed. Some persons also procrastinate most of the time, claiming that they work best under pressure. This is completely unacceptable; it encourages laziness and compromises the quality of output.
In other instances, time management has been seen as a way of limiting individuals and thereby depriving them of their freedom to have fun with their friends.

There are diverse aspects that determine individuals' responses to timeliness. It is widely believed that seniority at the corporate level greatly influences juniors' consciousness of timeliness. For instance, when a senior books an appointment with one of the juniors within the organization, the latter will always attempt punctuality, while the former at times hardly appreciates the weight of the appointment. Interaction increases cohesion among individuals, thereby resulting in different forms of relationships. It has been observed that workers' timely output is proportional to their level of interaction. This is because of the personal respect and dignity they place on their relationships, which they may not wish to sour by lagging behind the time stipulated for production. Another factor that influences the level of timeliness is reward. Where participants' efforts are appreciated financially, timeliness is considered to some extent.

In the world, there are powerful human forces of nature that exert a magnetic pull on individuals towards the mismanagement of time. Mankind, despite his busy work schedule, has fallen victim to unplanned visits. Many at times exercise an avoidance strategy to concentrate on their work, but this breaks social links. Sacrificing a social network is never easy; consequently, persons give in to the pressure of unplanned visits. Time management in Europe and the United States of America is to some extent admirable: individuals are programmed for the day. In Africa, however, activities catch up with people, and only then do they carry them out. This approach, coupled with doing many things at a time and maintaining the ego's desire to please others, results in an unethical and unsustainable use of time.
Another factor that perpetuates the unwise use of time is the fear of offending others. In a world where love is measured by how much time you are willing to share with your partner, individual control over time management is in most cases compromised. Cultural attachments and values put emphasis on family structures and relationships. Thus community functions, which cannot be pre-determined, supersede individuals' plans. The absence of a community calendar for its activities thereby enhances the mismanagement of time.

Sensitivity to time management is important and paramount; insensitivity results in huge economic losses. Mackenzie states that "Time which once seemed free and elastic has grown elusive and tight, and our measure of its worth is changing dramatically." He further highlights that "In Florida a man bills his doctor $90 for keeping him waiting. In New York a woman pays someone $20 an hour to do her shopping" (Mackenzie, 1997, p. 14). With reference to these statistics, a lack of consciousness of time management will cripple a continent's economy. Governments cannot afford to pay for economic losses arising from time wastage. This makes timeliness an important pillar of economic development.

Time is an invisible, unique, and finite precious resource, and also an important factor of production. It always appears to exist in plenty, yet time spent will never come back. This is why time management is instrumental in ensuring that the world achieves the maximum potential of accessible time. It is therefore one of the factors upon which economic and socio-cultural development is based.

Improved consciousness of the management of time translates into increased production.
It has been reported that at a Canadian airline, an exemplary and remarkable productivity increase occurred in the management offices as a result of the wise use of time. This also promotes and fosters the efficient use and conservation of energy (Mackenzie, 1997).

Comfort and convenience are the desire of every human being. Keen consideration of time management ensures that the health of individuals is not compromised. Scientists explain that human concentration levels decline with increased time use. It is always very disheartening and exhausting to attend a meeting that was scheduled to begin at eight yet commences at ten in the morning. The danger is that the agenda never gets adequate time to be discussed. With improved means of time management, issues will be given the attention they deserve.

Redress for the challenges of efficient time management is timely and needs immediate adoption. "Where there is no vision, the people perish" (a common phrase in many communities). People who work without a goal to achieve in life are in most cases frustrated and in the long run achieve less than their expectations. Timeliness focuses our lives. Like the salesmen who set their targets daily, every individual in the world should take the initiative to manage time well.

A schedule that documents daily, weekly, monthly, and if possible yearly programs and activities should be emphasized. This will set monitoring indicators that help one assess achievements and failures. Organizations such as NGOs, CBOs, the private sector, and governments should avoid unnecessary meetings with no serious agenda to discuss. They should also adopt performance contracts with their employees. To enlighten and expose individuals to the impacts of a lack of consciousness of timeliness, there is a need for organizations to hold frequent seminars and workshops on time utilization.
In conclusion, consciousness of timeliness is very important to economic and sociological development in the world. It is therefore a responsibility for all, both governments and individuals, to embrace this concept.

References

Mackenzie, Alec (1997). The Time Trap: The Classic Book on Time Management, 3rd ed. AMACOM, Div. of the American Management Association.

Sunday, July 21, 2019

Impacts on Agency Cost Theory

The main purpose of this research is to investigate how the determinants of capital structure (leverage) and the dividend payout policy impact the agency cost theory. The literature review picks up the relevant material related to agency theory, leverage, and dividend payout policy. It goes through the agency cost literature and explores the two financial policies, capital structure (leverage) and dividend payout, and how these policies influence the agency cost theory. 2.1 Agency theory Literature The notion of the agency theory is widely used in economics, finance, marketing, legal, and social sciences; Jensen and Meckling (1976) initiated and developed it. Capital structure (leverage) for firms is determined by agency costs, i.e., costs related to conflicts of interest between various groups, including managers, which have claims on the firm's resources (Harris and Raviv, 1991). Jensen and Meckling (1976) defined the agency relationship as "a contract under which one or more persons (the principal) engage another person (the agent), to perform some service on their behalf which involves delegating some decision making authority to the agent" (p. 308). Assuming that both parties are utility maximizers, the agent will not always act in the best interest of the principal. Furthermore, Jensen and Meckling (1976) contended that the principal can limit divergences from his interest by establishing appropriate incentives for the agent and by incurring monitoring costs (pecuniary and non-pecuniary), which are designed to limit the aberrant activities of the agent. Jensen and Meckling (1976) argued that agency costs are unavoidable, and since they are borne entirely by the owner, the owner is motivated to see these costs minimized.
The authors who initiated and developed the agency theory argued that if the owner manages a wholly owned firm, then he can make operating decisions that maximize his utility. Agency costs are generated if the owner-manager sells equity claims on the firm that are identical to his own. They are also generated by the divergence between his interests and those of the outside shareholders, since he then bears only a fraction of the costs of any non-pecuniary benefits he takes out in maximizing his own utility (Jensen and Meckling, 1976). Jensen and Meckling (1976) suggested two types of conflict in the firm. First, the conflict between shareholders and managers arises because managers hold less than a hundred percent of the residual claim. Therefore, they do not capture the entire gain from their profit enhancement activities, but they do bear the entire cost of these activities. For example, managers can invest less effort in managing firm resources and may be able to transfer firm resources to their own personal benefit, i.e., by consuming "perquisites" such as fringe benefits. The manager bears the entire cost of refraining from these activities but captures only a fraction of the gain. As a result, managers over-indulge in these interests relative to the level that would maximize the firm's value. This inefficiency is reduced the larger the fraction of the firm's equity owned by the manager. Holding constant the manager's absolute investment in the firm, increases in the fraction of the firm financed by debt increase the manager's share of the equity and mitigate the loss from the conflict between managers and shareholders. Furthermore, as pointed out by Jensen (1986), since debt commits the firm to pay out cash, it reduces the amount of free cash flow available to managers to engage in these types of interests. As a result, this reduction of the conflict between managers and shareholders constitutes a benefit of debt financing.
Second, they also suggested that the conflict between debt holders and shareholders arises because the debt contract gives shareholders an incentive to invest sub-optimally. In particular, the debt contract provides that, if an investment yields large returns, well above the face value of the debt, shareholders capture most of the gain, whereas if the investment fails, debt holders bear the consequences. Therefore, shareholders may benefit from investing in very risky projects, even value-decreasing ones; such investments result in a decrease in the value of the debt. Lasfer (1995) argued that debt exacerbates the conflict between debt holders and shareholders. Shareholders will benefit from investments in risky projects at the expense of debt holders: if the investment yields a return higher than the face value of the debt, shareholders capture most of the gain, whereas if the investment fails, debt holders lose, given that shareholders are protected by limited liability. On the other hand, if the benefits captured by debt holders reduce the returns to shareholders, then an incentive to reject positive net present value projects is created. Thus, the debt contract gives shareholders incentives to invest sub-optimally. In addition, Myers (1977) argued that firms with many growth opportunities should not be financed by debt, in order to avoid forgoing positive net present value projects. Furthermore, it has been argued that the magnitude of the agency costs varies among firms. It will depend on the tastes of managers, the ease with which they can exercise their own preferences as opposed to value maximization in decision making, and the costs of monitoring and bonding activities. Therefore, the agency costs depend upon the cost of measuring the manager's performance and evaluating it (Jensen and Meckling, 1976).
Jensen (1986) also points out that when firms make their financing decisions, they evaluate the advantages that may arise from the resolution of the conflicts between managers and shareholders and from long-run tax shields. In addition, Lasfer (1995) argues that debt finance creates a motivation for managers to work harder and make better investment decisions. Debt also works as a disciplining tool, because default allows creditors the option to force the firm into liquidation, and it generates information that can be used by investors to evaluate major operating decisions, including liquidation (Harris and Raviv, 1990). Jensen (1986) argued that issuing debt without retention of the proceeds bonds the managers to their promise to pay future cash flows to the debt holders. Thus, debt can be an effective substitute for dividends. By issuing debt in exchange for stock, managers bond their promise to pay out future cash flows in a way that cannot be accomplished by simple dividend increases. Consequently, managers give the recipients of the debt the right to take the firm to the bankruptcy court if they do not maintain their commitment to make the interest and principal payments. Thus, debt reduces the agency costs of free cash flow by reducing the cash flow available for spending at the discretion of managers. Jensen (1986) claimed that these control effects of debt are a potential determinant of capital structure. In practice, it is possible to reduce the owner-manager's non-pecuniary benefits by using instruments such as external auditing, formal control systems, budget restrictions, and incentive compensation systems, which serve to align the manager's interests more closely with those of the outside shareholders (Jensen and Meckling, 1976). Jensen (1986) suggested that leverage and dividends may act as substitute mechanisms to reduce agency costs.
Agency cost models predict that dividend payments can reduce the problems related to information asymmetry. Dividend payments may also be considered a mechanism for reducing the cash flow under management control, and so help to mitigate agency problems (Rozeff, 1982; Easterbrook, 1984). Paying dividends may therefore have a positive impact on firm value. "Agency theory posits that the dividend mechanism provides an incentive for managers to reduce the costs related to the principal-agent relationship; one way to reduce agency costs is to increase dividends" (Baker and Powell, 1999). They also claim that firms use dividends as a tool to monitor management performance. Moreover, Easterbrook (1984) and Jensen (1986) argue that agency costs exist in firms because managers may not always want to maximize shareholders' wealth, owing to the separation of ownership and control. Jensen (1986) advances the free cash flow theory, in which the conflict of interest between managers and stockholders is rooted in information asymmetry and self-interested behavior. He defines free cash flow as "cash flow in excess of that required to fund all projects that have positive net present value when discounted at the relevant cost of capital" (Jensen, 1986). Under the free cash flow hypothesis, firms prefer to increase their dividends and distribute the excess free cash flow in order to reduce agency costs, and markets react positively to this type of information. The theory is attractive because it is consistent with the evidence on investment and financing decisions (Jensen, 1986; Frankfurter and Wood, 2002).

2.2 Leverage Literature

This section reviews the determinants of capital structure in the relevant literature. The study by Titman and Wessels (1988) is considered one of the leading studies on developed markets.
Titman and Wessels sought to extend the empirical work on capital structure theory by examining a much broader set of capital structure theories and by analyzing measures of short-term, long-term, and convertible debt. Their data cover US industrial companies from 1974 to 1982, and they used a factor-analytic approach to estimate the impact of unobservable attributes on the choice of corporate debt ratios. The study examines the following factors: collateral value of assets, non-debt tax shields, growth, uniqueness of the business, industry classification, firm size, and firm profitability. They found a negative relationship between debt levels and the uniqueness of the business, and that short-term debt ratios are negatively related to firm size. However, they found no support for an effect on debt ratios of non-debt tax shields, volatility, collateral value of assets, or growth. In Jordan, Al-Khouri and Hmedat (1992) examined the effect of earnings variability on the capital structure of 65 Jordanian corporations over the period 1980 to 1988. The study used a multivariate regression approach with financial leverage as the dependent variable, measured in three ways: long-term debt over total assets; short-term debt over total assets; and short-term plus long-term debt over total assets. The standard deviation of earnings and the size of the firm were the independent variables. They concluded that firm size is a significant factor in determining the capital structure of the firm, while the relationship between earnings variability and financial leverage is insignificant. They also suggest that the type of industry is not a significant determinant of capital structure. Rajan and Zingales (1995) provided international evidence on the determinants of capital structure.
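The three leverage measures used by Al-Khouri and Hmedat can be computed directly from balance-sheet items; the toy figures below are invented for illustration.

```python
# The three dependent-variable definitions of leverage described above,
# applied to a hypothetical balance sheet (all figures invented).

def leverage_ratios(short_term_debt, long_term_debt, total_assets):
    return {
        "ltd_ta": long_term_debt / total_assets,                    # LTD / TA
        "std_ta": short_term_debt / total_assets,                   # STD / TA
        "td_ta": (short_term_debt + long_term_debt) / total_assets, # (STD + LTD) / TA
    }

firm = {"short_term_debt": 30.0, "long_term_debt": 50.0, "total_assets": 200.0}
print(leverage_ratios(**firm))
# {'ltd_ta': 0.25, 'std_ta': 0.15, 'td_ta': 0.4}
```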
Rajan and Zingales examined whether capital structure in other countries is related to factors similar to those that influence United States firms. Their database contains 2,583 companies across the G7 countries. They used regression analysis with the firm's leverage (total debt divided by total debt plus total equity) as the dependent variable, and tangible assets, the market-to-book ratio, firm size, and firm profitability as independent variables. They found that, in market-based economies, firms with substantial fixed assets are not always highly levered; nevertheless, they reported a positive relationship between tangible assets and firm size, on the one hand, and leverage on the other. Conversely, they confirmed a negative relationship between leverage and both the market-to-book ratio and profitability. Also within the capital structure literature, Ozkan (2001) investigated the determinants of firms' target capital structure and the role of the adjustment process in the UK, using a sample of 390 firms. A multiple regression (panel data) approach was used, with debt measured as total debt to total assets, and non-debt tax shields, firm size, liquidity, firm profitability, and firm growth as independent variables. He found that profitability, liquidity, non-debt tax shields, and growth opportunities are negatively related to leverage, while firm size has a positive effect on leverage. The study provided evidence that UK firms have long-term target leverage ratios and that they adjust quickly to their targets. The study by Booth et al. (2001) is considered one of the leading studies on developing countries. It aimed to assess whether capital structure theory is applicable across developing countries with different institutional structures.
The data comprise balance sheets and income statements for the largest companies in each selected country from 1980 to 1990, covering ten developing countries: India, Pakistan, Thailand, Malaysia, Zimbabwe, Mexico, Brazil, Turkey, Jordan, and Korea. The study used multivariate regression analysis with three dependent variables: the total debt ratio, the long-term book-debt ratio, and the long-term market-debt ratio. The independent variables were the average tax rate, tangibility, business risk, firm size, firm profitability, and the market-to-book ratio. Booth et al. found that the more profitable the firm, the lower the debt ratio, regardless of how the debt ratio is defined. In addition, the higher the tangible-asset mix, the higher the long-term debt ratio but the smaller the total debt ratio. They concluded that debt ratios in developing countries seem to be affected in the same way, and by the same set of variables, as in developed countries. Voulgaris et al. (2004) investigated the determinants of capital structure for Greek manufacturing firms. The study used panel data on two random samples, one of 143 small and medium-sized enterprises (SMEs) and one of 75 large-sized enterprises (LSEs), for the period 1988 to 1996. Leverage was the dependent variable (the short-run, long-run, and total debt ratios), while firm size, asset structure, profitability, growth rate, stock level, and receivables served as independent variables. The study found both similarities and differences in the determinants of capital structure across the two samples. The similarities are that firm size and growth opportunities are positively related to leverage, while profitability is negatively related to leverage.
The differences are that the inventory period and the accounts receivable collection period were found to be determinants of debt in SMEs but not in LSEs, and that liquidity affects the leverage of SMEs but not of LSEs. Finally, they suggested that there is a positive relationship between profit margins and the short-term debt ratio for SMEs only. Voulgaris et al. (2004) argued that "the attitude of banks toward small sized firms should be changed so they provide easier access to long-term debt financing" and that "enactment of rules that will allow transparency of operations in the Greek stock market and a healthier development of the newly established capital market for SMEs will assist Greek firms into achieving a stronger capital structure".

2.3 Dividend Payout Ratio Literature

Dividend payout ratios vary between firms, and dividend payout policy bears directly on agency cost theory. Rozeff (1982) argued that dividend policy can be rationalized by appeal to the transaction costs and agency costs associated with external finance, and found evidence of how agency costs influence the dividend payout ratio. Firms distribute lower dividend payouts when they have higher revenue growth, because that growth leads to higher investment expenditures; this supports the view that investment policy affects dividend policy, the reason being that external finance is costly. Conversely, firms distribute higher dividend payouts when insiders hold a smaller portion of the equity and/or a greater number of shareholders own the outside equity. Rozeff (1982) pointed out that this evidence supports the view that dividend payments are part of the firm's optimal monitoring and bonding package, which reduces agency costs.
Moreover, if agency costs decline as the dividend payout rises, while the transaction costs of external finance increase with the payout, then minimizing the sum of these costs leads to a unique optimal payout for a given firm. In addition, Hansen, Kumar, and Shome (1994) pointed out the relevance of monitoring theory for explaining the dividend policy of regulated electric utilities. From an agency cost perspective, they emphasized that dividends promote monitoring of what they call the shareholder-regulator conflict, so that dividends play a monitoring role; Easterbrook (1984), by contrast, had stressed the role of dividends in monitoring the shareholder-manager conflict. Utility firms choose among monitoring mechanisms for controlling agency costs according to the relative cost-effectiveness of those mechanisms (Crutchley and Hansen, 1989). The regulatory process affects the conflict between shareholders and managers by mitigating managers' power to appropriate shareholders' wealth and consume perquisites (Hansen et al., 1994). On the other hand, under cost-plus regulation, regulators may set in motion a managerial incentive structure that potentially conflicts with shareholders' interests, since the source of the conflict lies in differing perceptions of what constitutes a fair cost-plus outcome. Regulation can therefore control some agency costs while exacerbating others. Hansen et al. also note that the managers and shareholders of unregulated firms have several mechanisms, internal and external, for controlling agency costs.
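Rozeff's trade-off admits a simple stylized sketch: let agency costs fall with the payout ratio and the transaction costs of external finance rise with it, then search for the payout that minimizes their sum. The quadratic cost curves below are assumptions chosen for illustration, not Rozeff's functional forms.

```python
# Stylized version of Rozeff's (1982) cost trade-off. Agency costs fall
# as the payout ratio p rises; transaction costs of external finance
# rise with p; total cost is minimized at an interior payout ratio.

def total_cost(payout, agency_scale=10.0, transaction_scale=5.0):
    agency = agency_scale * (1.0 - payout) ** 2      # falls with payout
    transaction = transaction_scale * payout ** 2    # rises with payout
    return agency + transaction

# Grid search over payout ratios in [0, 1].
grid = [i / 100 for i in range(101)]
optimum = min(grid, key=total_cost)
print(optimum)  # 0.67 on this grid (the analytic optimum is 2/3)
```

With these scales the first-order condition gives an optimum of 2/3: neither a zero payout (maximal agency costs) nor a full payout (maximal external-finance costs) minimizes the total.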
They further observed that dividend policy is not the only device for reducing agency costs; based on their findings, they suggested that the cost of a dividend payout policy may be lower for utilities than for other types of firm. Utility companies also maintain high debt ratios, which likewise helps to contain equity agency costs. Aivazian et al. (2003b) compared the dividend policy behaviour of eight emerging markets with the dividend policies of US firms over the period 1980 to 1990. The sample included firms from Korea, Malaysia, Zimbabwe, India, Thailand, Turkey, Pakistan, and Jordan. They found that dividend changes are more difficult to predict for these emerging markets: firms with reputations for cutting dividends look much like those that increase their dividends, relative to the US control sample, and current dividends are less sensitive to past dividends than for the US firms. They also found that the Lintner model[1] does not work well for the sample of emerging markets. These results indicate that the institutional frameworks of these emerging markets make dividend policy a weaker device for signalling future earnings and reducing agency costs than it is for the US sample of firms. Furthermore, Omran and Pointon (2004) investigated the role of dividend policy in determining share prices, the determinants of payout ratios, and the factors that affect the stability of dividends for a sample of 94 Egyptian firms. They found that retentions are more important than dividends in firms with actively traded shares, but that accounting book value is more important than dividends and earnings for non-actively traded firms. However, when actively traded and non-traded firms are combined, dividends are more important than earnings.
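The Lintner model referred to above can be sketched as a partial-adjustment rule, D_t = D_{t-1} + c * (r * E_t - D_{t-1}): dividends move only part of the way toward a target payout each year. The target payout r and adjustment speed c below are hypothetical values chosen for illustration.

```python
# Sketch of the Lintner partial-adjustment model of dividends:
# each year dividends close a fraction `speed` of the gap between the
# previous dividend and the target payout `target_payout * earnings`.

def lintner_path(earnings, d0, target_payout=0.5, speed=0.3):
    dividends = []
    prev = d0
    for e in earnings:
        target = target_payout * e
        prev = prev + speed * (target - prev)  # partial adjustment
        dividends.append(round(prev, 3))
    return dividends

# Earnings jump from 100 to 200; dividends adjust gradually, not at once.
print(lintner_path([100, 200, 200, 200], d0=50.0))
# [50.0, 65.0, 75.5, 82.85] -- smoothing toward the new target of 100
```

This smoothing is what makes current dividends sensitive to past dividends; Aivazian et al.'s finding of weak sensitivity is what leads them to conclude the model fits the emerging-market sample poorly.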
On the determinants of payout ratios, they found that the leverage ratio, the market-to-book ratio, tangibility, and firm size are negatively related to payout ratios in actively traded firms. By contrast, business risk, the market-to-book ratio, and firm size (measured by total assets) are positively related to payout ratios in non-actively traded firms. For the whole sample, leverage has a positive relationship with payout ratios, while firm size (measured by market capitalization) is negatively related to payout ratios. Finally, stepwise logistic regression analysis shows that decreasing dividends is associated with a lack of liquidity and lower overall profitability, while increasing dividends is associated with higher overall profitability.

2.4 Summary

This chapter has reviewed the relevant literature on agency cost theory in relation to financial policies. It gave a theoretical background on how conflicts of interest arise between agents (managers) and principals (shareholders), and its second and third sections presented the determinants of leverage and of dividend payout policy. The following chapter describes the data and the methodology employed in this dissertation.

3. Methodology, Research Design and Data Description

The aim of the current study is to investigate the empirical evidence on the determinants of leverage and dividend policy under the agency theory framework for the period 2002-2007. The majority of previous studies in the field of capital structure have been conducted in the context of developed countries such as the USA and the UK.
It is therefore important to investigate the main determinants of leverage and dividend policy in developing countries, where capital markets are less developed and less competitive and suffer from a lack of compatible regulations and sufficient supervision. This chapter explains the research methodology of the study and identifies its sample. It also presents the econometric techniques that have been employed and gives a brief explanation of the specification tests used to identify which technique best fits the data set. The chapter is structured as follows: Section 3.1 presents the data description; Section 3.2 presents the sample of the study; Section 3.3 discusses the econometric techniques employed; and Section 3.4 provides a brief summary.

3.1 Data Description

The data used in the study are secondary data for companies listed on the Amman Stock Exchange (ASE) for the period 2002-2007. The data were extracted from the firms' annual reports and from the Amman Stock Exchange's publications (the Yearly Companies Guide and the Amman Stock Exchange Monthly Statistical Bulletins), and are readily available on CD and on the website of the Amman Stock Exchange. The study period was selected to minimize missing observations for the sample companies. Moreover, a different reporting system has been in use since 2000: the transparency act launched in 1999 obliged all companies listed on the Amman Stock Exchange to disclose their financial information and publish their annual reports according to the International Financial Reporting Standards. In other words, the 2002-2007 data series was chosen for consistency and comparability.
3.2 Sample of the Study

The sample consists of the Jordanian manufacturing companies listed on the Amman Stock Exchange over the period 2002-2007. The total number of companies listed on the ASE at the end of 2007 was 215, officially divided into four main economic sectors: banks, insurance, services, and industry. This study is concerned only with Jordanian manufacturing companies whose stocks are traded in the organized market. It is important to note that the capital structure of financial firms has special characteristics compared with that of non-financial firms, and such firms also receive special tax treatment (Lasfer, 1995). Financial firms also carry higher leverage, which could bias the results of the analysis, and their leverage is affected by investor insurance schemes (Rajan and Zingales, 1995). For these reasons, the potential sample consists of the non-financial (manufacturing) companies still listed on the Amman Stock Exchange. The total number of industrial companies listed on the ASE at the end of 2007 was 88, or 40.93% of all listed companies. The sample was selected from the Jordanian manufacturing companies by excluding all firms incorporated after 2002; all firms merged or acquired during the period; all firms liquidated or delisted by the Amman Stock Exchange; and all firms with missing information for the period. Applying these criteria resulted in a sample of 52 manufacturing companies. The data for the variables included in the study models are tested using three different econometric techniques, which are discussed briefly in the next sections.
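The selection criteria above can be expressed as a simple filter; the firm records below are invented for illustration.

```python
# Sketch of the sample-selection criteria described above, applied to a
# toy list of listed firms (all records invented).

firms = [
    {"name": "A", "incorporated": 1999, "merged_or_acquired": False,
     "delisted": False, "complete_data": True},
    {"name": "B", "incorporated": 2004, "merged_or_acquired": False,
     "delisted": False, "complete_data": True},   # incorporated after 2002
    {"name": "C", "incorporated": 1995, "merged_or_acquired": True,
     "delisted": False, "complete_data": True},   # merged during the period
    {"name": "D", "incorporated": 1998, "merged_or_acquired": False,
     "delisted": True, "complete_data": True},    # delisted by the ASE
    {"name": "E", "incorporated": 2000, "merged_or_acquired": False,
     "delisted": False, "complete_data": False},  # missing observations
]

def eligible(firm):
    """Keep firms incorporated by 2002 that were not merged, acquired,
    or delisted, and that have no missing observations."""
    return (firm["incorporated"] <= 2002
            and not firm["merged_or_acquired"]
            and not firm["delisted"]
            and firm["complete_data"])

sample = [f["name"] for f in firms if eligible(f)]
print(sample)  # ['A']
```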
3.3 Econometric Techniques

Hair et al. (1998) argued that the choice of econometric technique depends on the nature of the data employed in the study and on the research objectives. In order to find the best and most adequate model for the data, the current study employs the pooled-data technique and panel data analysis, the latter estimated by either the fixed effects or the random effects technique. The following sections briefly discuss the econometric techniques used to estimate the empirical models.

3.3.1 Pooled Ordinary Least Squares (OLS)

All the models used in the study have been tested with the pooled-data technique. Pooled data combine time series and cross-sectional observations (Gujarati, 2003). Pooled analysis has several advantages over pure time series or pure cross-sectional data: it generates more informative data, more variability, less collinearity among variables, more degrees of freedom, and more efficiency (Gujarati, 2003). The underlying assumption of pooled analysis is that the intercept and the coefficients of all the explanatory variables are the same for all firms and constant over time (no time-specific or individual-specific effects); the error term is assumed to capture the differences between firms (cross-sectional units) over time. However, Gujarati (2003) points out that these assumptions are highly restrictive: despite its simplicity and advantages, pooled regression may distort the true picture of the relationship between the dependent and independent variables across firms. The pooled model is estimated simply by Ordinary Least Squares (OLS), which is appropriate only if no individual (firm) or time-specific effects exist.
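A minimal sketch of pooled OLS on a toy panel (invented data): observations for all firms and years are stacked, and a single intercept and slope are estimated, ignoring any firm- or time-specific effects.

```python
# Pooled OLS on a stacked toy panel: 3 firms x 2 years, one regressor
# (say, firm size), with leverage as the dependent variable. Pooling
# treats every firm-year as an interchangeable observation.
import numpy as np

x = np.array([1.0, 1.2, 2.0, 2.1, 3.0, 3.2])        # regressor, stacked
y = np.array([0.20, 0.22, 0.30, 0.31, 0.40, 0.43])  # leverage, stacked

X = np.column_stack([np.ones_like(x), x])   # add a common intercept
beta = np.linalg.solve(X.T @ X, X.T @ y)    # (X'X)^{-1} X'y
print(beta)  # [intercept, slope]; the slope is positive on these data
```

A single intercept is fitted for all firms; if each firm in fact has its own intercept, this pooled estimate conflates the within-firm and between-firm variation, which is exactly the restriction the panel techniques below relax.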
If such effects exist, the unobserved influence of individual- and time-specific factors on the dependent variable can be accommodated using one of the panel data techniques. According to Gujarati (2003), panel data are a special form of pooled data in which the same cross-sectional unit is surveyed over time. Panel methods help researchers substantially reduce the problems that arise from omitted variables, such as time- and individual-specific variables, and provide more robust parameter estimates than time series and/or cross-sectional data alone. All the empirical models tested with pooled-data analysis were therefore tested again using the panel data techniques (fixed effects and random effects).

3.3.2 The Fixed Effects Model (FEM)

The fixed effects technique controls for unobserved heterogeneity, that is, individual-specific effects not captured by the observed variables. According to Gujarati (2003), the fixed effects model takes into account the specific effect of each firm (its "individuality") by allowing the intercept to vary across individuals (firms), while each individual's intercept does not vary over time; the slope coefficients are still assumed constant across individuals and over time. Two methods are used to control for unobserved fixed effects within the fixed effects model: first differences and least squares dummy variables (LSDV). The current study uses LSDV with two sets of dummy variables (industry and year dummies). The year dummies control for factors that are constant across firms but change over time, so the combined time and individual (firm) fixed effects model eliminates omitted-variable bias arising both from unobserved factors that are constant over time and from unobserved factors that are constant across firms.
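The LSDV idea can be sketched on a noiseless toy panel (invented data): firm dummies absorb the firm-specific intercepts, and OLS on the augmented design matrix recovers the common slope.

```python
# Least-squares-dummy-variables (LSDV) sketch: 3 firms x 2 years, each
# firm with its own intercept (unobserved heterogeneity) and a common
# slope of 0.1 on x. Firm dummies absorb the intercepts.
import numpy as np

firm = np.array([0, 0, 1, 1, 2, 2])                      # firm index
x = np.array([1.0, 2.0, 1.0, 2.0, 1.0, 2.0])             # regressor
alpha = np.array([0.10, 0.10, 0.30, 0.30, 0.50, 0.50])   # firm effects
y = alpha + 0.1 * x                                      # no noise

dummies = (firm[:, None] == np.arange(3)[None, :]).astype(float)
X = np.column_stack([dummies, x])            # one dummy per firm, plus x
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta)  # [0.1, 0.3, 0.5, 0.1]: firm intercepts, then the slope
```

A pooled regression of y on x alone would mix the between-firm differences into the slope; the dummies remove them, which is why the true slope of 0.1 is recovered exactly here.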
However, the fixed effects model consumes degrees of freedom when it is estimated by the LSDV method and many dummy variables are introduced (Gujarati, 2003); with many variables used as regressors, there is also the possibility of multicollinearity. It is worth noting that the OLS technique is used to estimate the fixed effects model.

3.3.3 The Random Effects Model (REM)

In contrast to the fixed effects model, the unobserved effects in the random effects model are captured by an error term (εit) consisting of an individual-specific component (ui) and an overall component (vit), which is the combined time series and cross-section error. The model treats the intercept as a random variable with a mean value (α0) across the cross-sectional units (firms), and the error component ui represents the random deviation of each firm's intercept from this mean value (Gujarati, 2003); the individual differences in the firms' intercepts are thus reflected in the error term. Generalized Least Squares (GLS) is used to estimate the random effects model, because GLS takes into account the particular correlation structure of the error term in the REM (Gujarati, 2003).

3.3.4 Statistical Specification Tests

The study uses three specification tests to identify which empirical method is best: the F-statistic for the fixed effects model versus the pooled model, the Lagrange Multiplier (LM) test for the random effects model versus the pooled model, and the Hausman test for the fixed effects model versus the random effects model. The following sub-sections offer brief discussions of these tests.
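As a numerical illustration of the Hausman test (the coefficient estimates and covariance matrices below are invented), the statistic is H = (b_FE - b_RE)' [V_FE - V_RE]^{-1} (b_FE - b_RE), asymptotically chi-squared with k degrees of freedom; a large H indicates that the random effects assumptions fail and favours the fixed effects model.

```python
# Hausman test sketch: compare fixed-effects and random-effects
# coefficient vectors. All numbers below are invented for illustration.
import numpy as np

b_fe = np.array([0.12, -0.40])   # fixed-effects estimates
b_re = np.array([0.10, -0.35])   # random-effects estimates
v_fe = np.diag([0.004, 0.010])   # covariance of the FE estimates
v_re = np.diag([0.002, 0.006])   # covariance of the RE estimates

diff = b_fe - b_re
H = diff @ np.linalg.inv(v_fe - v_re) @ diff
print(H)  # compare with the chi-squared critical value, df = 2
```

On these numbers H is about 0.83, well below the 5% chi-squared critical value of roughly 5.99 for two degrees of freedom, so the random effects model would not be rejected in this illustrative case.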
Literature review part picked up the relevant material related to agency theory, leverage, and dividends payout policy. The literature review section goes through the agency cost literature, and explores the financial policies; the capital structure (leverage), and the dividend payout policy and that these policies would influence the agency cost theory. 2.1 Agency theory Literature The notion of the agency theory is widely used in economics, finance, marketing, legal, and social sciences; Jensen and Meckling (1976) initiated and developed it. Capital structure (leverage) for the firms is determined by agency costs, i.e., costs related to conflict of interests between various groups including managers, which have claims on the firm’s resources (Harris and Raviv, 1991). Jensen and Meckling (1976) defined the agency relationship as â€Å"a contract under which one or more persons (the principal) engage another person (the agent), to perform some service on their behalf which involves delegating some decision making authority to the agent† pp.308. Assuming that both parties utility maximizes, the agents are not possible to act in the best interest of the principal. Furthermore, Jensen and Meckling (1976) contended that the principal can limit divergences from his interest by establishing appropriate incentives for the agent, and by incurring monitoring costs (pecuniary and non pecuniary), which are designed to limit the aberrant activities of the agent. Jensen and Meckling (1976) argued that the agency costs are unavoidable, since the agency costs are borne entirely by the owner. Jensen and Meckling (1976) contended that the owner is motivated to see these costs minimized. Authors who initiated and developed the agency theory have argued that if the owner manages a wholly owned firm, then he can make operating decisions that maximise his utility. The agency costs are generated if the owner manager sells equity claims on the firms, which are identical to his.  
It also generated by the divergence between his interest and those of the outside shareholders, since he then bears only a fraction of the costs of any non-pecuniary benefits he takes out maximizing his own utility (Jensen and Meckling, 1976). Jensen and Meckling (1976) suggested two types of conflicts in the firm; First of all, the conflict between shareholders and managers arises because managers hold less than a hundred percent of the residual claim. Therefore, they do not capture the entire gain from their profit enhancement activities, but they do bear the entire cost of these activities. For example, managers can invest less effort in managing firm resources and may be able to transfer firm resources to their own, personal benefit, i.e., by consuming â€Å"perquisites† such as a fringe benefits. The manager bears the entire cost of refraining from these activities but captures only a fraction of the gain. As a result, managers over indulge in these interests relative to the level that would maximize the firm value. This inefficiency reduced the large fraction of the equity owned by the manager. Holding constant the manager’s absolute investment in the firm, increases in the fraction of the firm financed by debt increases the manager’s share of the equity and mitigates the loss from conflict between the managers and shareholders. Furthermore, as pointed out by Jensen (1986), since debt commits the firm to pay out cash, it reduces the amount of free cash flow available to managers to engage in these types of interests.  As a result, this reduction of the conflict between managers and shareholders will constitute the benefit of debt financing. Second, they also suggested that the conflict between debt holders and shareholders arises because the debt contract, gives shareholders an incentive to invest sub optimally. 
Especially when the debt contract provides that, if an investment yields large returns, well above the face value of the debt, shareholders capture most of the gain. However, if the investment fails, debt holders bear the consequences. Therefore, shareholders may benefit from investing in very risky projects, even if they are under valued; such investments result in an adverse in the value of debt. Lasfer (1995) argued that debt exacerbates the conflict between debt holders and shareholders. Shareholders will benefit from investments in risky projects at the expense of debt holders.  If the investment yields higher return than the face value of debt, shareholders capture most of the gain, however, if the investment fails, debt holders lose, given that. Therefore, shareholders protected by the limited liability. On the other hand, if the benefits captured by debt holders reduce the returns to shareholders, then an incentive to reject positive net present projects has created. Thus, the debt contract gives shareholders incentives to invest sub optimally. In addition, Myers (1977) argued that the firms with many growth opportunities should not be financed by debt, to reduce the negative net value projects.   Furthermore, some of arguments have been debated that the magnitude of the agency costs varies among firms. It will depend on the tastes of managers, the ease with which they can exercise their own preferences as opposed to value maximization in decision making, and the costs of monitoring and bonding activities. Therefore, the agency costs depend upon the cost of measuring the manager’s performance and evaluating it (Jensen and Meckling, 1976). (Jensen, 1986) either points out that when firms make their financing decision, they evaluate the advantages that may arise from the resolution of the conflicts between managers, shareholders and from long run tax shields.   
In addition, Lasfer (1995) argues that debt finance creates a motivation for managers to work harder and make better investment decisions. On the other hand, debt works as a disciplining tool, because default allows creditors the option to force the firm into liquidation. Debt also generates information that can be used by investors to evaluate major operating decisions including liquidation (Harris and Raviv, 1990). Jensen (1986) debated that when using debt without retention of the proceeds of the issue, bonds the managers to meet their promise to pay future cash flows to the debt holders. Thus, debt can be an effective substitute for dividends. By issuing debt in exchange for stock, managers are bonding their promise to pay out future cash flows in a way that cannot be accomplished by simple dividend increases. Consequently, managers give recipients of the debt the right to take the firm to the bankruptcy court if they do not maintain their commitment to make the interest and principle payments. Thus, debt reduces the agency costs of free cash flow by reducing the cash flow available for spending at the discretion of managers. Jensen (1986) claimed that these control effects of debt are a potential determinant of capital structure. In practice, it is possible to reduce the owner manager non pecuniary benefits; by using these instruments external auditing, formal control systems, budget restrictions, and the establishment of incentive compensation systems serve to identify the manager’s interests more closely with those of the outside shareholders (Jensen and Meckling, 1976). Jensen (1986) suggested that leverage and dividend may act as a substitute mechanism to reduce the agency costs. Agency cost models predict that dividend payments can reduce the problems related to information asymmetry. 
Dividend payments may also be considered a mechanism for reducing the cash flow under management control, which helps to mitigate agency problems (Rozeff, 1982; Easterbrook, 1984). Therefore, paying dividends may have a positive impact on firm value. "Agency theory posits that the dividend mechanism provides an incentive for managers to reduce the costs related to the principal agent relationship; one way to reduce agency costs is to increase dividends" (Baker and Powell, 1999). They also claim that firms use dividends as a tool to monitor management performance. Moreover, Easterbrook (1984) and Jensen (1986) argue that agency costs exist in firms because managers may not always want to maximize shareholders' wealth, due to the separation of ownership and control. Jensen (1986) addresses the free cash flow theory, in which the conflict of interest between managers and stockholders is rooted in informational asymmetry and self-interested behavior. He defines free cash flow as "cash flow in excess of that required to fund all projects that have positive net present value when discounted at the relevant cost of capital" (Jensen, 1986). Within the context of the free cash flow hypothesis, firms prefer to increase their dividends and distribute the excess free cash flow in order to reduce agency costs. Consequently, markets react positively to this type of information. This theory is attractive because it is consistent with the evidence about investment and financing decisions (Jensen, 1986; Frankfurter and Wood, 2002). 2.2 Leverage Literature This section reviews the determinants of capital structure in the relevant literature. The study by Titman and Wessels (1988) is considered one of the leading studies in developed markets.
They extended the empirical work on capital structure theory by examining a much broader set of capital structure theories and by analyzing measures of short term, long term, and convertible debt. The data cover US industrial companies from 1974 to 1982, and they used a factor-analytic approach to estimate the impact of unobservable attributes on the choice of corporate debt ratios. The study examined the following factors: collateral value of assets, non-debt tax shields, growth, uniqueness of the business, industry classification, firm size, volatility, and firm profitability. They found a negative relationship between debt levels and the uniqueness of the business, and that short term debt ratios are negatively related to firm size. However, they did not find support for an effect on debt ratios arising from non-debt tax shields, volatility, collateral value of assets, or growth. In Jordan, Al-Khouri and Hmedat (1992) examined the effect of earnings variability on the capital structure of 65 Jordanian corporations over the period from 1980 to 1988. The study used a multivariate regression approach with financial leverage as the dependent variable, measured in three ways: first, long term debt over total assets; second, short term debt over total assets; and finally, short term plus long term debt over total assets. The standard deviation of earnings and the size of the firm were used as independent variables. They concluded that firm size is a significant factor in determining the capital structure of the firm, while the relationship between earnings variability and financial leverage is insignificant. Furthermore, they suggest that the type of industry is not a significant factor in determining the capital structure of the firm. Rajan and Zingales (1995) provided international evidence on the determinants of capital structure.
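The three leverage measures used as dependent variables by Al-Khouri and Hmedat (1992) can be illustrated with a minimal sketch. The function name and the balance-sheet figures below are hypothetical, not data from the study.

```python
def leverage_measures(long_term_debt, short_term_debt, total_assets):
    """Return the three leverage ratios: LTD/TA, STD/TA, and (LTD+STD)/TA."""
    return {
        "ltd_to_assets": long_term_debt / total_assets,
        "std_to_assets": short_term_debt / total_assets,
        "total_debt_to_assets": (long_term_debt + short_term_debt) / total_assets,
    }

# Hypothetical firm: 200 long term debt, 100 short term debt, 1000 total assets.
ratios = leverage_measures(long_term_debt=200.0, short_term_debt=100.0, total_assets=1000.0)
print(ratios)
```

Each measure captures a different slice of the financing mix, which is why studies often report results for all three rather than a single debt ratio.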
They examined whether capital structure in other countries is related to factors similar to those that influence United States firms. The database contains 2583 companies in the G7 countries. They used regression analysis with the firm's leverage (total debt divided by total debt plus total equity) as the dependent variable; tangible assets, the market to book ratio, firm size, and firm profitability were used as independent variables. They found that firms with a large proportion of fixed assets are not always highly levered; nevertheless, they supported a positive relationship between tangible assets and firm size, on the one hand, and leverage on the other. By contrast, they confirmed a negative relationship between leverage and both the market to book ratio and profitability. From the capital structure literature, Ozkan (2001) also investigated the determinants of the target capital structure of firms and the role of the adjustment process in the UK, using a sample of 390 firms. A multiple regression approach (panel data) was used, with debt measured by total debt to total assets as the dependent variable, and non-debt tax shields, firm size, liquidity, firm profitability, and firm growth as independent variables. He confirmed that profitability, liquidity, non-debt tax shields, and growth opportunities have a negative relationship with leverage, and found a positive effect of firm size on leverage. The study provided evidence that UK firms have long term target leverage ratios and that they adjust quickly to their targets. The study by Booth et al. (2001) is considered one of the leading studies on developing countries. It aimed to assess whether capital structure theory is applicable across developing countries with different institutional structures.
The data include balance sheets and income statements for the largest companies in each selected country from 1980 to 1990, covering 10 developing countries: India, Pakistan, Thailand, Malaysia, Zimbabwe, Mexico, Brazil, Turkey, Jordan, and Korea. The study used multivariate regression analysis with three dependent variables: the total debt ratio, the long term book debt ratio, and the long term market debt ratio. The independent variables are the average tax rate, tangibility, business risk, firm size, firm profitability, and the market to book ratio. Booth et al. found that the more profitable the firm, the lower the debt ratio, regardless of how the debt ratio is defined. In addition, the higher the tangible assets mix, the higher the long term debt ratio, but the smaller the total debt ratio. Finally, they concluded that debt ratios in developing countries seem to be affected in the same way by the same set of variables that are significant in developed countries. Voulgaris et al. (2004) investigated the determinants of capital structure for Greek manufacturing firms. The study used panel data from two random samples, one of 143 small and medium sized enterprises (SMEs) and another of 75 large sized enterprises (LSEs), for the period from 1988 to 1996. Leverage (the short run debt ratio, the long run debt ratio, and the total debt ratio) was used as the dependent variable, while firm size, asset structure, profitability, growth rate, stock level, and receivables were used as independent variables. The study suggested that there are both similarities and differences in the determinants of capital structure between the two samples. The similarities include that firm size and growth opportunities are positively related to leverage, while profitability has a negative relationship with leverage.
As for the differences, the inventory period and the accounts receivable collection period were found to be determinants of debt in SMEs but not in LSEs, and liquidity does not affect LSE leverage but does affect that of SMEs. Finally, they also suggested that there is a positive relationship between profit margins and the short term debt ratio only for SMEs. Voulgaris et al. (2004) argued that "the attitude of banks toward small sized firms should be changed so they provide easier access to long-term debt financing" and that "enactment of rules that will allow transparency of operations in the Greek stock market and a healthier development of the newly established capital market for SMEs will assist Greek firms into achieving a stronger capital structure". 2.3 Dividend payout ratio literature Dividend payout ratios vary between firms, and dividend payout policy bears directly on agency cost theory. Rozeff (1982) argued that dividend policy can be rationalized by appealing to the transaction costs and agency costs associated with external finance. Moreover, Rozeff (1982) found evidence supporting the influence of agency costs on the dividend payout ratio. He found that firms distribute lower dividend payouts when they have higher revenue growth, because this growth leads to higher investment expenditures.  This evidence supports the view that investment policy affects dividend policy; the reason is that external finance is costly. Conversely, he found that firms distribute higher dividend payouts when insiders hold a lower portion of the equity and (or) a greater number of shareholders own the outside equity. Rozeff (1982) pointed out that this evidence supports the view that dividend payments are part of the firm's optimum monitoring and bonding package, which reduces agency costs.
Moreover, if agency costs decline as the dividend payout rises, while the transaction costs of external finance increase with the payout, then minimizing the sum of these costs leads to a unique optimum payout for a given firm. In addition, Hansen, Kumar, and Shome [HKS] (1994) pointed out the relevance of the monitoring theory for explaining the dividend policy of regulated electric utilities. From an agency cost perspective, they emphasized that dividends promote monitoring of what they call the shareholder-regulator conflict; dividends thus play a monitoring role. By contrast, Easterbrook (1984) noted that dividends monitor the shareholder-manager conflict. It has also been observed that utility firms choose among monitoring mechanisms for controlling agency costs depending on the relative cost effectiveness of those mechanisms (Crutchley and Hansen, 1989). The regulatory process affects the conflict between shareholders and managers by mitigating the managers' power to appropriate shareholders' wealth and consume perquisites (Hansen et al., 1994). On the other hand, under the cost-plus concept, regulators may set in motion a managerial incentive structure that potentially conflicts with shareholders' interests; the shareholder-regulator conflict arises from differences in perceptions of what constitutes a fair cost-plus arrangement. Therefore, regulation can control some agency costs while exacerbating others. In their study, they also note that the managers and shareholders of unregulated firms have several mechanisms, whether internal or external, for controlling agency costs.
In addition, they observed that the use of dividend policy to reduce agency costs is not unlimited; on the basis of their findings, they suggested that the cost of the dividend payout policy may be lower than the costs paid by other types of firms. In fact, utility companies maintain high debt ratios, which also help control equity agency costs. Aivazian et al. (2003b) compared the dividend policy behaviour of eight emerging markets with the dividend policies of US firms over the period from 1980 to 1990. The sample included firms from Korea, Malaysia, Zimbabwe, India, Thailand, Turkey, Pakistan, and Jordan. They found that it is more difficult to predict dividend changes in these emerging markets than for the US control sample, because firms with reputations for cutting dividends are of somewhat similar quality to those that increase their dividends. In addition, current dividends are less sensitive to past dividends than for the US sample of firms. They also found that the Lintner model[1] does not work well for the sample of emerging markets. These results indicate that the institutional frameworks in these emerging markets make dividend policy a weaker device for signaling future earnings and reducing agency costs than in the US sample of firms. Furthermore, Omran and Pointon (2004) investigated the role of dividend policy in determining share prices, the determinants of payout ratios, and the factors that affect the stability of dividends for a sample of 94 Egyptian firms. They found that retentions are more important than dividends in firms with actively traded shares, but that accounting book value is more important than dividends and earnings for non-actively traded firms. However, when they combined both the actively traded and non-traded firms, they found that dividends are more important than earnings.
Regarding the determinants of payout ratios, they found that the leverage ratio, the market to book ratio, tangibility, and firm size are negatively related to payout ratios in actively traded firms. By contrast, business risk, the market to book ratio, and firm size (measured by total assets) are positively related to payout ratios in non-actively traded firms. Furthermore, for the whole sample, leverage has a positive relationship with payout ratios, while firm size (measured by market capitalization) is negatively related to payout ratios. Finally, stepwise logistic regression analysis shows that decreasing dividends is associated with a lack of liquidity and overall profitability, whereas increasing dividends is associated with higher overall profitability. 2.4 Summary This chapter reviewed the relevant literature on agency cost theory as it relates to financial policies. It also gave a theoretical background on how conflicts of interest arise between the agents (managers) and the principals (shareholders). The second and third sections presented the determinants of leverage and of dividend payout policy. The following chapter describes the data and the methodology employed for this dissertation. 3. Methodology, Research Design and Data Description The aim of the current study is to investigate empirically the determinants of leverage and dividend policy under the agency theory concept for the period 2002-2007. The majority of previous studies in the field of capital structure have been conducted in the context of developed countries such as the USA and the UK.
It is therefore important to investigate the main determinants of leverage and dividend policy in developing countries, where capital markets are less developed, less competitive, and suffer from a lack of compatible regulations and sufficient supervision. This chapter explains the research methodology of the study and identifies its sample. Moreover, it presents the econometric techniques that have been employed and gives a brief explanation of the specification tests used to identify which technique best fits the data set. The chapter is structured as follows: Section (3.1) presents the data description; Section (3.2) presents the sample of the study; Section (3.3) discusses the econometric techniques employed in the study; and finally, Section (3.4) provides a brief summary. 3.1 Data Description The data used in the study are secondary data for companies listed on the Amman Stock Exchange (ASE) for the period 2002-2007. The data were extracted from the firms' annual reports and from Amman Stock Exchange publications (the Yearly Companies Guide and the Amman Stock Exchange Monthly Statistical Bulletins). The data are readily available on CD and on the website of the Amman Stock Exchange. The study period was selected to minimize the number of missing observations for the sample companies. Moreover, a different reporting system has been used since 2000. The new reporting system resulted from the transparency act launched in 1999, which forced all companies listed on the Amman Stock Exchange to disclose their financial information and publish their annual reports according to the International Financial Reporting Standards. In other words, the data series for the period 2002-2007 was chosen for consistency and comparability purposes.
3.2 Sample of the study The sample consists of the Jordanian manufacturing companies listed on the Amman Stock Exchange for the period 2002-2007. The total number of companies listed on the ASE at the end of 2007 was 215. Officially, these companies are divided into four main economic sectors: the banks sector, the insurance sector, the services sector, and the industrial sector. This study is concerned only with Jordanian manufacturing companies whose stocks are traded in the organized market. It is important to note that the capital structure of financial firms has special characteristics compared to that of non-financial firms, and financial firms also receive special tax treatment (Lasfer, 1995). Moreover, financial firms have higher leverage, which may bias the analysis, and their leverage is affected by investor insurance schemes (Rajan and Zingales, 1995). For these reasons, the potential sample consists of the non-financial (manufacturing) companies still listed on the Amman Stock Exchange. The total number of industrial companies listed on the ASE at the end of 2007 was 88, which is 40.93% of the total number of companies listed in that market. The study applied the following criteria in selecting the sample of Jordanian manufacturing companies: it excluded all firms incorporated after 2002, all firms that merged or were acquired during the period, all firms liquidated or delisted by the Amman Stock Exchange, and finally all firms with missing information for the period. Applying these criteria resulted in a sample of 52 manufacturing companies. The data for the variables included in the study models are tested using three different econometric techniques, which are discussed briefly in the next sections.
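The sample-selection screen described above can be sketched as a simple filter. The firm records and field names below are hypothetical illustrations, not the study's actual data.

```python
# Hypothetical listed firms; each record flags the exclusion criteria.
firms = [
    {"name": "A", "incorporated": 1995, "merged": False, "delisted": False, "missing_data": False},
    {"name": "B", "incorporated": 2004, "merged": False, "delisted": False, "missing_data": False},
    {"name": "C", "incorporated": 1990, "merged": True,  "delisted": False, "missing_data": False},
    {"name": "D", "incorporated": 1998, "merged": False, "delisted": True,  "missing_data": False},
    {"name": "E", "incorporated": 2000, "merged": False, "delisted": False, "missing_data": True},
]

sample = [
    f for f in firms
    if f["incorporated"] <= 2002     # exclude firms incorporated after 2002
    and not f["merged"]              # exclude merged or acquired firms
    and not f["delisted"]            # exclude liquidated or delisted firms
    and not f["missing_data"]        # exclude firms with missing observations
]
print([f["name"] for f in sample])  # ['A']
```

Applied to the 88 listed industrial companies, the same four screens yield the final sample of 52 firms.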
3.3 Econometric techniques Hair et al. (1998) argued that the choice of econometric technique depends on the nature of the data employed in the study and on the extent to which it serves the research objectives. In order to find the best and most adequate model for the data, the current study employs the pooled data technique and panel data analysis, which is usually estimated by either the fixed effects technique or the random effects technique.  The following sections provide a brief discussion of the econometric techniques that the current study uses to estimate the empirical models. 3.3.1 Pooled Ordinary Least Squares (OLS) technique All the models used in the study have been tested with the pooled data analysis technique. Pooled data combine time series and cross-sectional observations (Gujarati, 2003). Pooled data analysis has many advantages over pure time series or pure cross-sectional data: it generates more informative data, more variability, less collinearity among variables, more degrees of freedom, and more efficiency (Gujarati, 2003). The underlying assumption behind the pooled analysis is that the intercept and the coefficients of all the explanatory variables are the same for all the firms and constant over time (no time-specific or individual-specific aspects). It also assumes that the error term captures the differences between the firms (cross-sectional units) over time. However, Gujarati (2003) has pointed out that these assumptions are highly restrictive. He argues that, despite its simplicity and advantages, the pooled regression may distort the true picture of the relationship between the dependent and independent variables across firms. The pooled model is estimated simply by Ordinary Least Squares (OLS). However, OLS is appropriate only if no individual (firm) or time-specific effects exist.
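The pooled approach can be illustrated with a minimal sketch: firm-year observations from several firms are stacked into one sample and a single intercept and slope are fitted by least squares. The closed-form estimator and the toy numbers below are for illustration only, not the study's data.

```python
def pooled_ols(x, y):
    """Closed-form OLS for one regressor: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

# Two firms, three years each, stacked ("pooled") into one sample.
x = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]   # e.g. a firm size proxy
y = [2.0, 4.0, 6.0, 3.0, 5.0, 7.0]   # e.g. scaled leverage
intercept, slope = pooled_ols(x, y)
print(intercept, slope)  # 0.5 2.0
```

Note that the single intercept (0.5) averages over the two firms' different levels; this is exactly the restriction that the fixed and random effects models relax.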
If they exist, the effects of unobserved individual-specific and time-specific factors on the dependent variable can be accommodated by using one of the panel data techniques.   According to Gujarati (2003), panel data are a special form of pooled data in which the same cross-sectional unit is surveyed over time. Panel data help researchers to substantially reduce the problems arising from omitted variables, such as time-specific and individual-specific variables, and to provide more robust parameter estimates than time series and (or) cross-sectional data. All the empirical models that were tested using pooled data analysis were tested again using panel data techniques (fixed effects and random effects).   3.3.2 The fixed effects model (FEM) The fixed effects technique allows control for unobserved heterogeneity, that is, individual-specific effects not captured by the observed variables. According to Gujarati (2003), the fixed effects model takes into account the specific effect of each firm ("the individuality") by allowing the intercept to vary across individuals (firms), although each individual's intercept does not vary over time. It still assumes that the slope coefficients are constant across individuals and over time. Two methods are used to control for the unobserved fixed effects within the fixed effects model: first differences and the Least Squares Dummy Variables (LSDV) method.  For the purposes of the current study, LSDV was used with two sets of dummy variables (industry and year dummies). The additional dummy variables control for variables that are constant across firms but change over time. Therefore, the combined time and individual (firm) fixed effects model eliminates the omitted variables bias arising both from unobserved factors that are constant over time and from unobserved factors that are constant across firms.
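The effect of firm-specific intercepts can be illustrated with the within transformation, which is numerically equivalent to LSDV for the slope coefficient: demeaning x and y within each firm removes the firm intercepts before fitting. The function and panel below are a toy sketch, not the study's sample.

```python
def within_slope(panel):
    """panel: {firm_id: list of (x, y) pairs}. Returns the fixed-effects slope."""
    num = den = 0.0
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            # Demean within the firm, so each firm's intercept drops out.
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# Two firms with very different intercepts (10 vs 50) but the same slope (2.0).
panel = {
    "firm_A": [(1.0, 12.0), (2.0, 14.0), (3.0, 16.0)],
    "firm_B": [(1.0, 52.0), (2.0, 54.0), (3.0, 56.0)],
}
print(within_slope(panel))  # 2.0
```

A pooled regression on the same data would be distorted by the level difference between the two firms; the within estimator recovers the common slope regardless of the firm intercepts.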
However, the fixed effects model consumes degrees of freedom when it is estimated by the Least Squares Dummy Variable (LSDV) method and too many dummy variables are introduced (Gujarati, 2003). Furthermore, with too many variables used as regressors in the models, there is the possibility of multicollinearity. It is worth noting that the OLS technique is used in estimating the fixed effects model. 3.3.3 The Random Effects Model (REM) In contrast to the fixed effects model, the unobserved effects in the random effects model are captured by the error term (εit), which consists of an individual-specific component (ui) and an overall component (vit), the combined time series and cross-section error. Moreover, the model treats the intercept as a random variable with a mean value (α0) of all cross-sectional (firm) intercepts, and the error component represents the random deviation of the individual intercept from this mean value (Gujarati, 2003). Consequently, the individual differences in the intercept values of each firm are reflected in the error term (ui). Generalized Least Squares (GLS) is used in estimating the random effects model, because the GLS technique takes into account the correlation structure of the error term in the Random Effects Model (REM) (Gujarati, 2003). 3.3.4 Statistical specification tests The study uses three specification tests to identify which empirical method is best: the fixed effects model versus the pooled model (the F-statistic), the random effects model versus the pooled model (the Lagrange Multiplier (LM) test), and the fixed effects model versus the random effects model (the Hausman test). The following sub-sections offer brief disc