EVENT HANDLING FOR ARCHITECTURAL EVENTS AT HIGH PRIVILEGE LEVELS

BACKGROUND

0001 The present disclosure generally relates to the field of electronics. More particularly, an embodiment of the invention relates to techniques for controlling flow after occurrence of architectural events at high privilege levels in a processor.

0002 Various mechanisms may be used to change the flow of control (such as the processing path or instruction sequence being followed) in a processor. For example, an interrupt may be used to change the flow of control in a processor. Generally, an interrupt may be triggered by an external interrupt signal provided to a processor. The processor may respond to the interrupt by jumping to an interrupt handler routine. In some cases, interrupts may be masked by the operating system executing at a supervisor privilege level, such that a software program executing at a relatively lower privilege level than the operating system may have no opportunity to modify such control flow changing events without modifying the operating system (OS).
BRIEF DESCRIPTION OF THE DRAWINGS

0003 The detailed description is provided with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items. Figs. 1, 6, and 7 illustrate block diagrams of embodiments of computing systems, which may be utilized to implement various embodiments discussed herein. Fig. 2 illustrates a block diagram of portions of a processor core and other components of a computing system, according to an embodiment of the invention.
Figs. 3 and 4 illustrate portions of various types of data, according to various embodiments. Fig. 5 illustrates a flow diagram of a method to generate an interrupt in response to occurrence of a yield event, according to an embodiment.

DETAILED DESCRIPTION

0008 In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments of the invention may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments of the invention. Further, various aspects of embodiments of the invention may be performed using various mechanisms, such as integrated semiconductor circuits ('hardware'), computer-readable instructions organized into one or more programs ('software'), or some combination of hardware and software.
For the purposes of this disclosure, reference to 'logic' shall mean hardware, software, or some combination thereof. 0009 Some of the embodiments discussed herein may be utilized to perform event handling operations.
In an embodiment, an 'event' refers to a condition that may or may not require some action to be taken by logic. Furthermore, events may be classified into different types based on the action that is to be taken. For example, certain exceptions (such as divide by zero) may be characterized as synchronous events that occur each time a corresponding instruction is executed.
On the other hand, interrupts that are generated by external devices may be characterized as asynchronous events, in part, because they may occur at any time. In one embodiment, an 'architectural event' refers to an event or condition that may be monitored, e.g., by programming information corresponding to the architectural event into a state (such as a channel discussed with reference to Fig. 2). In an embodiment, software may configure a channel to monitor certain architectural events which may not otherwise be observable by software and/or hardware. For example, a last level cache miss may be defined as an architectural event that is used to perform dynamic profile guided optimizations.
Also, an architectural event may be defined to monitor conditions that are occurring on a co-processor that is located on the same integrated circuit chip as a processor. In an embodiment, an 'architectural event' may generally refer to an event or condition that occurs within processing resources or other logic present on the same integrated circuit chip as a processor. 0010 In one embodiment, after an event (such as an architectural event) occurs at a privilege level higher than a user privilege level (e.g., a highest privilege level that may also be referred to as privilege level 0 or supervisor privilege level), the corresponding occurrence response (e.g., a yield event) may cause generation of an interrupt. In an embodiment, the term 'privilege level' refers to an attribute associated with the execution mode that determines which operations are allowed and which are disallowed.
For example, application programs may be executed at a privilege level (e.g., a user privilege level) that does not allow the application programs to interfere with system state or otherwise to execute instructions that interfere with system state. In some embodiments, the operating system may execute at a supervisor privilege level, e.g., to manipulate system state. Further, a high privilege level (such as a privilege level higher than a user privilege level) may allow operating system software to safeguard system state such that application programs executing at a lower privilege level are disallowed from manipulating system state. Additionally, some embodiments may enable handling of events at a privilege level that is higher than a user privilege level, e.g., without requiring changes to an operating system or other software executing at a supervisor privilege level (such as a device driver). In some embodiments, generation of an interrupt (e.g., corresponding to the occurrence response, such as a yield event) for a high privilege level (e.g., a supervisor privilege level) may provide a relatively easy migration path that reduces the impact of changes on the operating system code that executes at supervisor privilege level, in part, because supervisor privilege level software may already be aware of how to deal with pre-emption due to interrupts.
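The privilege-level gating described above can be sketched in a few lines. This is an illustrative model only, not the patent's implementation: the level numbers follow the common ring convention mentioned in the text (a lower number means more privilege, with level 0 as supervisor and level 3 as user), and the function name is invented for the sketch.

```python
# Hypothetical model of privilege-level gating: lower number = more privilege.
SUPERVISOR_LEVEL = 0  # privilege level 0 / supervisor privilege level
USER_LEVEL = 3        # user privilege level

def may_modify_system_state(current_level: int) -> bool:
    """System state may only be manipulated at supervisor privilege (level 0)."""
    return current_level == SUPERVISOR_LEVEL

# An application at user privilege is disallowed from manipulating system state.
assert may_modify_system_state(SUPERVISOR_LEVEL)
assert not may_modify_system_state(USER_LEVEL)
```

The asymmetry captured here is why the disclosure routes yield events at supervisor privilege through the existing interrupt mechanism: code at that level already copes with pre-emption, so no new entry path into privileged state is needed.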
0011 In an embodiment, various logic provided in a processor may be used to perform event handling tasks, such as the processors discussed with reference to Figs. 1, 2, 6, and 7. More particularly, Fig. 1 illustrates a block diagram of a computing system 100, according to an embodiment of the invention. The system 100 may include one or more processors 102-1 through 102-N (generally referred to herein as 'processors 102' or 'processor 102'). The processors 102 may communicate via an interconnection network or bus 104.
Each processor may include various components some of which are only discussed with reference to processor 102-1 for clarity. Accordingly, each of the remaining processors 102-2 through 102-N may include the same or similar components discussed with reference to the processor 102-1. 0012 In an embodiment, the processor 102-1 may include one or more processor cores 106-1 through 106-M (referred to herein as 'cores 106' or more generally as 'core 106'), a shared cache 108, and/or a router 110. The processor cores 106 may be implemented on a single integrated circuit (IC) chip. Moreover, the chip may include one or more shared caches (such as cache 108) and/or private caches (such as level 1 (L1) cache 111-1, generally referred to herein as 'L1 cache 111'), buses or interconnections (such as a bus or interconnection network 112), memory controllers (such as those discussed with reference to Figs. 6 and 7), or other components. 0013 In one embodiment, the router 110 may be used to communicate between various components of the processor 102-1 and/or system 100.
Moreover, the processor 102-1 may include more than one router 110. Furthermore, the multitude of routers (110) may be in communication to enable data routing between various components inside or outside of the processor 102-1. 0014 The shared cache 108 may store data (e.g., including instructions) that are utilized by one or more components of the processor 102-1, such as the cores 106. For example, the shared cache 108 may locally cache data stored in a memory 114 for faster access by components of the processor 102. In an embodiment, the cache 108 may include a mid-level cache (such as a level 2 (L2), a level 3 (L3), a level 4 (L4), or other levels of cache), a last level cache (LLC), and/or combinations thereof. Moreover, various components of the processor 102-1 may communicate with the shared cache 108 directly, through a bus (e.g., the bus 112), and/or a memory controller or hub. As shown in Fig.
1, event handling data 120 may be stored in the memory 114 (or in an interrupt controller, as will be further discussed herein). Moreover, the event handling data 120 may be utilized by a component of the core 106 to generate an interrupt in response to an event occurrence, as will be further discussed herein, for example, with reference to Figs. 3-5. 0015 Fig. 2 illustrates a block diagram of portions of a processor core 106 and other components of a computing system, according to an embodiment of the invention.
In one embodiment, the arrows shown in Fig. 2 illustrate the flow direction of instructions through the core 106. One or more processor cores (such as the processor core 106) may be implemented on a single integrated circuit chip (or die) such as discussed with reference to Fig. 1. Moreover, the chip may include one or more shared and/or private caches (e.g., cache 108 of Fig. 1), interconnections (e.g., interconnections 104 and/or 112 of Fig. 1), memory controllers, or other components. 0016 As illustrated in Fig.
2, the processor core 106 may include a fetch unit 202 to fetch instructions for execution by the core 106. The instructions may be fetched from any storage device, such as the memory 114 and/or the memory devices discussed with reference to Figs. 6 and 7. The core 106 may also include a decode unit 204 to decode the fetched instruction. For instance, the decode unit 204 may decode the fetched instruction into a plurality of uops (micro-operations). Additionally, the core 106 may include a schedule unit 206. The schedule unit 206 may perform various operations associated with storing decoded instructions (e.g., received from the decode unit 204) until the instructions are ready for dispatch, e.g., until all source values of a decoded instruction become available.
In one embodiment, the schedule unit 206 may schedule and/or issue (or dispatch) decoded instructions to an execution unit 208 for execution. The execution unit 208 may execute the dispatched instructions after they are decoded (e.g., by the decode unit 204) and dispatched (e.g., by the schedule unit 206). In an embodiment, the execution unit 208 may include more than one execution unit, such as a memory execution unit, an integer execution unit, a floating-point execution unit, or other execution units. The execution unit 208 may also perform various arithmetic operations such as addition, subtraction, multiplication, and/or division, and may include one or more arithmetic logic units (ALUs).
In an embodiment, a co-processor (not shown) may perform various arithmetic operations in conjunction with the execution unit 208. 0017 Further, the execution unit 208 may execute instructions out-of-order. Hence, the processor core 106 may be an out-of-order processor core in one embodiment.
The core 106 may also include a retirement unit 210. The retirement unit 210 may retire executed instructions after they are committed. In an embodiment, retirement of the executed instructions may result in processor state being committed from the execution of the instructions, physical registers used by the instructions being de-allocated, etc. 0018 The core 106 may additionally include a trace cache or microcode read-only memory (uROM) 212 to store microcode and/or traces of instructions that have been fetched (e.g., by the fetch unit 202). The microcode stored in the uROM 212 may be used to configure various hardware components of the core 106.
In an embodiment, the microcode stored in the uROM 212 may be loaded from another component in communication with the processor core 106, such as a computer-readable medium or other storage device discussed with reference to Figs. 6 and 7. The core 106 may also include a bus unit 214 to enable communication between components of the processor core 106 and other components (such as the components discussed with reference to Fig. 1) via one or more buses (e.g., buses 104 and/or 112). The core 106 may additionally include one or more registers 216 to store data accessed by various components of the core 106.
0019 Additionally, the processor core 106 illustrated in Fig. 2 may include one or more channels 218 that correspond to a set of architecture states. Each privilege level (such as privilege level 0 or supervisor privilege level (e.g., the highest privilege level), privilege level 3 (e.g., a relatively lower privilege level that may correspond to a user level privilege in an embodiment), etc.) may have a corresponding channel. Further, each channel 218 may correspond to one or more scenarios and corresponding yield events. In an embodiment, the channels 218 may contain scenario specifications. In turn, a yield event may be signaled when the scenario associated with the channel triggers.
Hence, a yield event may be the occurrence response to a scenario. 0020 Furthermore, the core 106 may include an event monitoring logic 220, e.g., to monitor the occurrence of one or more events that may be associated with architecturally defined scenarios (e.g., in the channel(s) 218) that may be used to trigger a corresponding yield event. As shown in Fig. 2, the logic 220 may be provided within the execution unit 208. However, the logic 220 may be provided elsewhere in the processor core 106. As will be further discussed herein, e.g., with reference to Figs.
3-5, the logic 220 may generate a signal after a monitored event occurs, and a yield conversion logic 221 may, in response to the generated signal, cause generation of an interrupt, e.g., based on data stored in the channels 218. For example, the events that are being monitored (e.g., with reference to data stored in the channels 218) may occur asynchronously with respect to the execution of the current instruction sequence on the processor core 106.
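The monitor-then-convert flow just described can be modeled in software. The sketch below is a toy analogue under stated assumptions: `Channel` loosely mirrors a channel (218), `YieldConverter` plays the role of the yield conversion logic (221), and all names, the scenario numbers, and the use of Python callables as "interrupts" are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Channel:
    """Hypothetical software model of a channel (218): a scenario identifier
    plus an enable bit controlling whether its yield triggers an interrupt."""
    scenario_id: int
    yield_interrupt_enable: bool = True

class YieldConverter:
    """Toy analogue of the yield conversion logic (221): when a monitored
    scenario triggers, 'generate an interrupt' by invoking a registered
    handler; unmonitored or disabled scenarios produce no interrupt."""
    def __init__(self) -> None:
        self.channels: Dict[int, Channel] = {}
        self.handlers: Dict[int, Callable[[], None]] = {}
        self.log: List[str] = []

    def program_channel(self, ch: Channel, handler: Callable[[], None]) -> None:
        # Software programs the channel with a scenario and a handler.
        self.channels[ch.scenario_id] = ch
        self.handlers[ch.scenario_id] = handler

    def on_event(self, scenario_id: int) -> None:
        # Called when the event monitoring logic signals a scenario trigger.
        ch = self.channels.get(scenario_id)
        if ch is not None and ch.yield_interrupt_enable:
            self.handlers[scenario_id]()  # interrupt -> service routine
        else:
            self.log.append(f"scenario {scenario_id}: no interrupt generated")

conv = YieldConverter()
hits: List[str] = []
conv.program_channel(Channel(scenario_id=7), lambda: hits.append("isr ran"))
conv.on_event(7)   # monitored scenario triggers -> handler invoked
conv.on_event(42)  # unmonitored scenario -> no interrupt
```

Note the asynchrony the text emphasizes: `on_event` may fire at any point relative to the "current instruction sequence," which is precisely why the disclosure funnels it through the existing interrupt mechanism rather than a new pre-emption source.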
0021 Moreover, as shown in Fig. 2, the event handling data 120 may be stored (or cached) in one or more of the caches 111 and/or 108, instead of or in addition to the memory 114. The memory 114 may also store one or more of: interrupt service routines 222 (e.g., that may be triggered in response to an interrupt that is generated in response to a yield event by the logic 220), operating systems 224 (e.g., to manage hardware or software resources of a computing system that includes the core 106), and/or device drivers 225 (e.g., to enable communication between the OS 224 and various devices such as those discussed with reference to Figs. 6 and 7). In one embodiment, after the logic 221 causes generation of an interrupt (e.g., corresponding to a yield event), the address of an interrupt service routine (222) may be obtained from the event handling data 120 (which may be stored in an interrupt descriptor table in some embodiments).
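The lookup step at the end of this paragraph, obtaining a service-routine address from the event handling data 120, can be sketched as a simple table dispatch. This is an illustrative analogue only: the vector number, the routine name, and the use of a Python dict standing in for an interrupt descriptor table are all assumptions made for the sketch.

```python
from typing import Callable, Dict, List

# Plays the role of the event handling data 120 (cf. an interrupt descriptor
# table): maps an interrupt vector to a service routine (here, a callable).
event_handling_data: Dict[int, Callable[[], None]] = {}
trace: List[str] = []

def yield_event_isr() -> None:
    """Hypothetical interrupt service routine (222) for a yield event."""
    trace.append("handled yield event")

YIELD_VECTOR = 0xE7  # assumed vector number, invented for illustration
event_handling_data[YIELD_VECTOR] = yield_event_isr

def raise_interrupt(vector: int) -> None:
    """Look up the service routine for `vector` and invoke it."""
    isr = event_handling_data.get(vector)
    if isr is None:
        raise KeyError(f"no service routine installed for vector {vector:#x}")
    isr()

raise_interrupt(YIELD_VECTOR)  # yield event -> interrupt -> routine 222
```

Because the dispatch reuses an existing table-driven mechanism, the operating system's ordinary interrupt plumbing delivers the yield event without a new delivery path, matching the migration argument made in paragraph 0010.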
0022 In an embodiment, an event may be handled by one or more of the interrupt service routines 222 (e.g., which may also be cached in the caches 111 and/or 108 in various embodiments) that is invoked to complete handling of the event. Since invoking the routine 222 may cause preemption, e.g., due to the asynchronous nature of events, the routine 222 may execute in an arbitrary thread context and may affect how the code that executes in the context of the current thread accesses data structures, uses locks, and interacts with other threads executing on the processor core. Moreover, software executing at supervisor privilege level may generally be carefully designed to avoid potential issues due to preemption including, for example, putting restrictions on the locks that may be acquired and interactions with other software components. Accordingly, in one embodiment, instead of introducing a new source of preemption due to the need to handle the monitored events which are asynchronous in nature and adding software support for this, an existing interruption mechanism may be used so that when an event that is being monitored (e.g., with reference to data stored in the channels 218) occurs, it results in an interrupt being generated (e.g., by the logic 221) and the routines 222 may be, in turn, invoked. 0023 Fig. 3 illustrates a block diagram of portions of data stored in the channels 218 of Fig. 2 for supervisor privilege level, according to an embodiment.
The channels 218 may store data including, for example, one or more entries 302. Each entry 302 may include one or more of: a channel identifier (ID) field 304 (e.g., that corresponds to one of the channels 218 of Fig. 2), a yield interrupt enable field 306 (e.g., to indicate whether a corresponding yield is enabled to trigger an interrupt), a scenario identifier field 308 (e.g., to identify a scenario), a scenario status field 310 (e.g., to indicate the occurrence of the scenario identified by field 308), and an entry valid field 312 (e.g., to indicate whether the corresponding entry is valid, which may enable or disable a channel identified by the field 304).
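One way to picture an entry 302 is as a packed register word. The sketch below follows the field names just listed (304-312), but the bit positions and widths are assumptions invented for illustration; the patent does not specify a layout.

```python
# Illustrative packing of a channel entry (302). Field names follow the text;
# bit positions and widths are hypothetical assumptions for this sketch only.
CHANNEL_ID_BITS = 4       # field 304: bits 0-3 (assumed)
YIELD_ENABLE_BIT = 4      # field 306: bit 4
SCENARIO_ID_SHIFT = 5     # field 308: bits 5-12 (8 bits, assumed)
SCENARIO_STATUS_BIT = 13  # field 310: bit 13
ENTRY_VALID_BIT = 14      # field 312: bit 14

def pack_entry(channel_id: int, yield_enable: bool, scenario_id: int,
               status: bool, valid: bool) -> int:
    """Pack the five fields into a single integer word."""
    assert channel_id < (1 << CHANNEL_ID_BITS) and scenario_id < (1 << 8)
    return (channel_id
            | (int(yield_enable) << YIELD_ENABLE_BIT)
            | (scenario_id << SCENARIO_ID_SHIFT)
            | (int(status) << SCENARIO_STATUS_BIT)
            | (int(valid) << ENTRY_VALID_BIT))

def unpack_entry(word: int) -> dict:
    """Recover the five fields from a packed word."""
    return {
        "channel_id": word & ((1 << CHANNEL_ID_BITS) - 1),
        "yield_interrupt_enable": bool(word & (1 << YIELD_ENABLE_BIT)),
        "scenario_id": (word >> SCENARIO_ID_SHIFT) & 0xFF,
        "scenario_status": bool(word & (1 << SCENARIO_STATUS_BIT)),
        "entry_valid": bool(word & (1 << ENTRY_VALID_BIT)),
    }

word = pack_entry(3, True, 0x2A, False, True)
assert unpack_entry(word)["scenario_id"] == 0x2A
```

Clearing the assumed valid bit (field 312) disables the whole entry regardless of the other fields, which matches the text's note that entry validity may enable or disable the identified channel.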
In an embodiment, each channel 218 of Fig. 2 may contain a scenario specification, where a scenario corresponds to one or more architectural events and is identified by a scenario identifier (308). In one embodiment, the list of scenario identifiers may be enumerable by making use of an instruction (such as the CPUID instruction, in accordance with at least one instruction set architecture). Also, in one embodiment, the data corresponding to the channels 218 may be stored in hardware registers (e.g., including registers 216). Further details regarding usage of the fields 304-312 will be discussed with reference to Fig. 5. 0024 Fig. 4 illustrates a block diagram of portions of the event handling data 120 of Figs.
1-2, according to an embodiment. In some embodiments, event handling data 120 shown in Fig. 4 may be shared with data corresponding to event monitoring interrupts.
For example, in some embodiments, a local vector table (LVT) used for a performance monitor interrupt mechanism may be modified such as shown in Fig. 4. More particularly, Fig. 4 illustrates various entries that may correspond to a modified LVT, according to one embodiment. As shown in Fig. 4, the event handling data 120 may include one or more entries 402.