
Zou Yang, Zeng Xiaoqin, Han Xiuqing, Zhang Kang. Context-attributed graph grammar framework for specifying visual languages [J]. Journal of Southeast University (English Edition), 2008, 24(4): 455-461. [doi:10.3969/j.issn.1003-7985.2008.04.012]


Journal of Southeast University (English Edition) [ISSN: 1003-7985 / CN: 32-1325/N]

Volume:
24
Issue:
4
Page:
455-461
Research Field:
Computer Science and Engineering
Publishing date:
2008-12-30

Info

Title:
Context-attributed graph grammar framework for specifying visual languages
Author(s):
Zou Yang1 Zeng Xiaoqin1 Han Xiuqing1 Zhang Kang2
1College of Computer and Information Engineering, Hohai University, Nanjing 210098, China
2Department of Computer Science, University of Texas at Dallas, Texas 75080-3021, USA
Keywords:
visual language; graph grammar; context-attributed; parsing; confluence
CLC number:
TP301.2
DOI:
10.3969/j.issn.1003-7985.2008.04.012
Abstract:
Since the specifications of most existing context-sensitive graph grammars tend to be either too intricate or not intuitive, a novel context-sensitive graph grammar formalism, called the context-attributed graph grammar (CAGG), is proposed. To resolve the embedding problem, the context information of a graph production in the CAGG is represented as context attributes of the nodes involved. Moreover, several properties of a confluent set of CAGG productions are characterized, and an algorithm based on them is developed to decide whether or not a set of productions is confluent, which provides the foundation for the design of efficient parsing algorithms. A comparison of the CAGG with several typical context-sensitive graph grammars shows that it is more succinct and, at the same time, more intuitive than the others, making it more readily applicable to the specification of visual languages.
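To make the abstract's idea concrete, the sketch below is a minimal, purely illustrative Python model, not the formalism or the confluence-checking algorithm defined in the paper: nodes of a production's left-hand side carry context attributes, and a deliberately naive pairwise label-overlap test stands in for a real confluence check. All names (Node, Production, may_conflict, is_confluent) and the overlap criterion are assumptions made here for illustration only.

from dataclasses import dataclass
from itertools import combinations

# Hypothetical toy model, loosely inspired by the CAGG idea of attaching
# context attributes to the nodes of a graph production; it is NOT the
# formalism or the confluence-decision algorithm from the paper.

@dataclass(frozen=True)
class Node:
    label: str                        # node type, e.g. "Stmt" or "If"
    context: frozenset = frozenset()  # context attributes used for embedding

@dataclass
class Production:
    name: str
    lhs: list   # nodes consumed (the redex); edges are omitted for brevity
    rhs: list   # nodes produced

def may_conflict(p: Production, q: Production) -> bool:
    """Naive overlap test: two productions *might* interfere if their
    redexes share a node label, so applying one could destroy part of the
    other's match. A real confluence check would analyse critical pairs of
    overlapping matches rather than bare labels."""
    return bool({n.label for n in p.lhs} & {n.label for n in q.lhs})

def is_confluent(productions: list) -> bool:
    """Declare the set confluent only if no pair of distinct productions
    can overlap under the naive test above."""
    return not any(may_conflict(p, q) for p, q in combinations(productions, 2))

if __name__ == "__main__":
    p1 = Production("reduce-if",
                    lhs=[Node("If"), Node("Stmt", frozenset({"follows:If"}))],
                    rhs=[Node("Stmt")])
    p2 = Production("reduce-seq",
                    lhs=[Node("Stmt"), Node("Stmt")],
                    rhs=[Node("Stmt")])
    print(is_confluent([p1, p2]))   # False: both redexes contain a Stmt node

In the paper's actual setting, confluence of a set of CAGG productions is characterized by formal properties and decided by a dedicated algorithm; the label-overlap test above is only a placeholder used to illustrate why confluence matters for order-independent parsing.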

References:

[1] Ferrucci F, Tortora G, Tucci M, et al. A predictive parser for visual languages specified by relation grammars [C]//Proc IEEE Symposium on Visual Languages. St. Louis, Missouri, USA, 1994: 245-252.
[2] Wittenburg K, Weitzman L, Talley J. Unification-based grammars and tabular parsing for graphical languages [J]. Journal of Visual Languages and Computing, 1991, 2(4): 347-370.
[3] Marriott K. Constraint multiset grammars [C]//Proc IEEE Symposium on Visual Languages. St. Louis, Missouri, USA, 1994: 118-125.
[4] Rozenberg G. Handbook of graph grammars and computing by graph transformation: foundations [M]. World Scientific, 1997: 1-94.
[5] Drewes F, Hoffmann B, Janssens D, et al. Adaptive star grammars [C]//ICGT 2006, LNCS 4178. Berlin: Springer, 2006: 77-91.
[6] Rekers J, Schürr A. Defining and parsing visual languages with layered graph grammars [J]. Journal of Visual Languages and Computing, 1997, 8(1): 27-55.
[7] Bottoni P, Taentzer G, Schürr A. Efficient parsing of visual languages based on critical pair analysis and contextual layered graph transformation [C]//Proc IEEE Symposium on Visual Languages. Seattle, Washington, USA, 2000: 59-60.
[8] Kong J, Zhang K. Parsing spatial graph grammars [C]//Proc IEEE Symposium on Visual Languages and Human-Centric Computing. Rome, Italy, 2004: 99-101.
[9] Minas M. Parsing of adaptive star grammars [C]//Proc 2nd International Workshop on Graph and Model Transformation. Brighton, UK, 2006, 4: 1-15.
[10] Zhang D Q, Zhang K, Cao J. A context-sensitive graph grammar formalism for the specification of visual languages [J]. The Computer Journal, 2001, 44(3): 187-200.
[11] Zeng X, Zhang K, Kong J, et al. RGG+: an enhancement to the reserved graph grammar formalism [C]//Proc IEEE Symposium on Visual Languages and Human-Centric Computing. Dallas, TX, USA, 2005: 272-274.

Memo

Memo:
Biography: Zou Yang (1976—), male, lecturer, yzou@hhu.edu.cn.
Foundation items: The National Natural Science Foundation of China (No. 60571048, 60673186, 60736015), the National High Technology Research and Development Program of China (863 Program) (No. 2007AA01Z178).
Citation: Zou Yang, Zeng Xiaoqin, Han Xiuqing, et al. Context-attributed graph grammar framework for specifying visual languages [J]. Journal of Southeast University (English Edition), 2008, 24(4): 455-461.
Last Update: 2008-12-20